
BRepNet: Boundary Representation Neural Architecture

Updated 26 January 2026
  • BRepNet is a neural architecture that directly processes CAD boundary representations, using topological message passing for enhanced segmentation accuracy.
  • It leverages coedge-centric convolutions and customizable kernel walk templates to capture relationships among faces, edges, and coedges in solid models.
  • By bypassing mesh conversion, BRepNet achieves high-fidelity segmentation and analysis, offering significant improvements in CAD modeling applications.

Boundary representation neural architectures target the direct processing of solid models as encountered in Computer-Aided Design (CAD), eschewing the need for mesh or point cloud approximation. BRepNet embodies a topological message passing scheme adapted to boundary representation (B-rep) structures, enabling segmentation and analysis tasks through coedge-centric convolutions on native B-rep topologies. Its expressiveness is anchored in leveraging the full relational structure among faces, edges, and oriented coedges, affording enhanced fidelity for manifold geometric modeling (Lambourne et al., 2021).

1. Topological Entities and Data Structures

BRepNet operates on canonical B-rep entities:

  • Faces $F = \{ f_1, \ldots, f_{|F|} \}$: surface patches.
  • Edges $E = \{ e_1, \ldots, e_{|E|} \}$: curve segments bounding faces.
  • Oriented coedges $C = \{ c_1, \ldots, c_{|C|} \}$: directed half-edges, each carrying a direction along a face loop.

Each coedge $c$ possesses fields:

  • $next(c)$: successor along the parent face's boundary loop.
  • $mate(c)$: oppositely oriented coedge on the same edge.
  • $face(c),\, edge(c)$: parent face and edge.

Input features are assigned via:

  • $x_f \in \mathbb{R}^p$ for faces,
  • $x_e \in \mathbb{R}^q$ for edges,
  • $x_c \in \mathbb{R}^r$ for coedges.

These aggregate into feature matrices:

  • $X^f \in \mathbb{R}^{|F| \times p},\ X^e \in \mathbb{R}^{|E| \times q},\ X^c \in \mathbb{R}^{|C| \times r}$.

Sparse binary matrices encode B-rep topology:

  • $N, P, M \in \{0,1\}^{|C| \times |C|}$: permutation matrices for the next, prev, and mate relations.
  • $E_{inc} \in \{0,1\}^{|C| \times |E|}$ and $F_{inc} \in \{0,1\}^{|C| \times |F|}$: incidence relations.
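The topology matrices above can be sketched concretely. The example below builds them for a hypothetical minimal "pillow" solid (two faces glued along three shared edges); the variable names (`next_of`, `mate_of`, etc.) are illustrative, not taken from the paper's code.

```python
import numpy as np

# Toy closed solid: faces 0 and 1 share edges 0, 1, 2.
# Coedges 0-2 bound face 0; coedges 3-5 bound face 1 in opposite order.
num_coedges, num_edges, num_faces = 6, 3, 2
next_of = [1, 2, 0, 4, 5, 3]   # next(c): successor in the parent face loop
mate_of = [5, 4, 3, 2, 1, 0]   # mate(c): opposite coedge on the same edge
edge_of = [0, 1, 2, 2, 1, 0]   # edge(c): parent edge
face_of = [0, 0, 0, 1, 1, 1]   # face(c): parent face

def permutation_matrix(mapping, n):
    """P[i, mapping[i]] = 1, so (P @ X)[i] = X[mapping[i]]."""
    P = np.zeros((n, n), dtype=np.int8)
    P[np.arange(n), mapping] = 1
    return P

N = permutation_matrix(next_of, num_coedges)
M = permutation_matrix(mate_of, num_coedges)
P = N.T                                   # prev is the inverse of next

E_inc = np.zeros((num_coedges, num_edges), dtype=np.int8)
E_inc[np.arange(num_coedges), edge_of] = 1
F_inc = np.zeros((num_coedges, num_faces), dtype=np.int8)
F_inc[np.arange(num_coedges), face_of] = 1

# On a closed solid, mate is an involution: applying it twice is the identity.
assert np.array_equal(M @ M, np.eye(num_coedges, dtype=np.int8))
```

Because $N$, $P$, and $M$ are permutation matrices, left-multiplying a coedge feature matrix by them simply re-indexes its rows, which is what makes the walk-based kernels in the next section cheap sparse products.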

2. Convolutional Kernel Design

BRepNet convolution centers on each coedge $c_i$ by constructing sets of topological walks using powers of $N$, $P$, and $M$. Three template lists steer the kernel:

  • $K^c = \{K^c_1, \ldots, K^c_{|K^c|}\}$: walks ending on coedges.
  • $K^e = \{K^e_1, \ldots, K^e_{|K^e|}\}$: walks ending on edges.
  • $K^f = \{K^f_1, \ldots, K^f_{|K^f|}\}$: walks ending on faces.

For layer $l$, hidden-state matrices $H_f^{(l)}, H_e^{(l)}, H_c^{(l)}$ are used. Feature gathering proceeds as:

  • $\Psi^f = [K^f_1 H_f^{(l)} \Vert \cdots \Vert K^f_{|K^f|} H_f^{(l)}]$,
  • $\Psi^e$ and $\Psi^c$ are formed analogously for edges and coedges,
  • all concatenated: $\Psi^{(l)} = [\Psi^f \Vert \Psi^e \Vert \Psi^c]$.

A multilayer perceptron (MLP) applies:

  • $Z^{(l)} = \sigma\left(\Psi^{(l)} W^{(l)} + b^{(l)}\right),\ Z^{(l)} \in \mathbb{R}^{|C| \times 3s}$.

Splitting blockwise,

  • $Z^{(l)} = [H_c^{(l+1)} \Vert Z_f^{(l)} \Vert Z_e^{(l)}]$,
  • max-pooling over the coedges incident to each face/edge yields $H_f^{(l+1)}$ and $H_e^{(l+1)}$.
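The gather-concatenate-transform step can be sketched in NumPy. The walk lists below (identity, next, mate; own edge; own face and mate's face) are illustrative choices, not the paper's exact templates, and the topology is the toy two-face "pillow" solid.

```python
import numpy as np

rng = np.random.default_rng(0)
num_C, num_E, num_F, s = 6, 3, 2, 4

# Toy topology: two faces glued along three shared edges.
next_of = [1, 2, 0, 4, 5, 3]
mate_of = [5, 4, 3, 2, 1, 0]
edge_of = [0, 1, 2, 2, 1, 0]
face_of = [0, 0, 0, 1, 1, 1]

I = np.eye(num_C)
N = np.zeros((num_C, num_C)); N[np.arange(num_C), next_of] = 1
M = np.zeros((num_C, num_C)); M[np.arange(num_C), mate_of] = 1
E_inc = np.zeros((num_C, num_E)); E_inc[np.arange(num_C), edge_of] = 1
F_inc = np.zeros((num_C, num_F)); F_inc[np.arange(num_C), face_of] = 1

# Illustrative kernel walks, each a matrix product of topology matrices.
Kc = [I, N, M]              # the coedge itself, its next, its mate
Ke = [E_inc]                # the coedge's own edge
Kf = [F_inc, M @ F_inc]     # the coedge's face and its mate's face

Hc = rng.standard_normal((num_C, s))
He = rng.standard_normal((num_E, s))
Hf = rng.standard_normal((num_F, s))

# Psi^(l): one row per coedge, width s * (|Kf| + |Ke| + |Kc|).
Psi = np.concatenate([K @ Hf for K in Kf] +
                     [K @ He for K in Ke] +
                     [K @ Hc for K in Kc], axis=1)

W = rng.standard_normal((Psi.shape[1], 3 * s))
b = np.zeros(3 * s)
Z = np.maximum(Psi @ W + b, 0.0)             # one linear + ReLU stage
Hc_next, Zf, Ze = np.split(Z, 3, axis=1)     # blockwise split
```

Each row of `Psi` collects the hidden states of all entities reachable from one coedge through the template walks, so a single dense matrix multiply implements the whole neighborhood transform.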

3. Message Passing Dynamics

BRepNet conforms to the message passing neural network framework. At each layer:

  • For coedge $c$, neighbor states $\{h_n^{(l)}\}$ are reached via the $K^c, K^e, K^f$ template walks.
  • The per-coedge message $m_c^{(l)}$ is computed by a shared MLP $M^{(l)}$ over the concatenated neighbor states.
  • Coedge state update: $h_c^{(l+1)} = m_c^{(l)}$.
  • Per-face and per-edge states update via pooled aggregation over incident coedges:

$$m_f^{(l)} = \max_{c :\, face(c) = f} \left[ Z_f^{(l)}(c) \right]$$

$$m_e^{(l)} = \max_{c :\, edge(c) = e} \left[ Z_e^{(l)}(c) \right]$$

$$h_f^{(l+1)} = U_f^{(l)}(m_f^{(l)}),\quad h_e^{(l+1)} = U_e^{(l)}(m_e^{(l)})$$

where $U$ is typically the identity or a small linear transformation.
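The pooled aggregation above is a channel-wise max over the coedges incident to each face (or edge). A minimal sketch, with hypothetical names and toy values:

```python
import numpy as np

def pool_max(Z_rows, owner_of, num_owners):
    """Channel-wise max of coedge rows grouped by their owning face/edge."""
    out = np.full((num_owners, Z_rows.shape[1]), -np.inf)
    for i, owner in enumerate(owner_of):
        out[owner] = np.maximum(out[owner], Z_rows[i])
    return out

# Six coedges with 2-channel features; coedges 0-2 belong to face 0,
# coedges 3-5 to face 1.
Zf = np.array([[1., 5.], [2., 1.], [0., 3.],
               [4., 0.], [1., 2.], [3., 6.]])
face_of = [0, 0, 0, 1, 1, 1]
Hf_next = pool_max(Zf, face_of, num_owners=2)
# Hf_next → [[2., 5.], [4., 6.]]
```

Max-pooling makes the face and edge updates invariant to the ordering of their incident coedges, which matters because face loops have no canonical starting coedge.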

4. Layer and Parameter Specification

A typical BRepNet instantiation consists of:

  • Number of layers $T$: usually $2$ or $3$ convolution units.
  • Hidden dimensionality $s$: uniform across faces, edges, and coedges.
  • Each layer's MLP: two hidden layers (width $3s$), ReLU activations, output size $3s$.
  • No explicit residual connections; each layer computes fresh states.
  • Final readout unit yielding class scores per face: an additional convolution with MLP output dimension $|U|$, followed by max-pooling over incident coedges to produce $H_f^{(T+1)} \in \mathbb{R}^{|F| \times |U|}$ (raw class scores).
  • Training loss: cross-entropy on per-face labels.
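The per-layer MLP described above can be sketched as follows; the helper names and the weight initialization are illustrative assumptions, with input width set by the number of kernel walks.

```python
import numpy as np

def make_mlp(in_dim, s, rng):
    """Two hidden layers of width 3s, output size 3s; random placeholder weights."""
    dims = [in_dim, 3 * s, 3 * s, 3 * s]
    return [(rng.standard_normal((d_in, d_out)) * np.sqrt(2.0 / d_in),
             np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def mlp_forward(params, x):
    for W, b in params:
        x = np.maximum(x @ W + b, 0.0)   # linear + ReLU at every stage
    return x

rng = np.random.default_rng(0)
s, walk_width = 4, 24                      # e.g. 6 kernel walks of width s
Psi = rng.standard_normal((6, walk_width)) # one gathered row per coedge
Z = mlp_forward(make_mlp(walk_width, s, rng), Psi)   # shape (|C|, 3s)
```

The $3s$-wide output is what gets split blockwise into the new coedge states and the face/edge pooling candidates.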

5. Forward Pass: Pseudocode Workflow

The BRepNet forward pass executes sequentially:

Inputs:
  Xf ∈ ℝ^{|F|×p}, Xe ∈ ℝ^{|E|×q}, Xc ∈ ℝ^{|C|×r}
  Topology: N, P, M, E_inc, F_inc
  Kernels: Kf = {Kf_i}, Ke = {Ke_j}, Kc = {Kc_k}
  Layers: T
  Hidden dimension: s
  MLP parameters: {Θ^(0), …, Θ^(T)}

Hf^(0) ← Xf
He^(0) ← Xe
Hc^(0) ← Xc

for l in 0 to T-1:
    # Fetch & concatenate
    Ψf ← [Kf_1 Hf^(l) ‖ … ‖ Kf_|Kf| Hf^(l)]
    Ψe ← [Ke_1 He^(l) ‖ … ‖ Ke_|Ke| He^(l)]
    Ψc ← [Kc_1 Hc^(l) ‖ … ‖ Kc_|Kc| Hc^(l)]
    Ψ ← [Ψf ‖ Ψe ‖ Ψc]

    # Linear + ReLU
    Z ← ReLU(Ψ W^(l) + b^(l))
    Split Z into [Hc^(l+1), Zf, Ze]

    # Pool to faces and edges
    for each face f_k ∈ F:
        Hf^(l+1)[k] ← max { Zf[i] : face(c_i) = f_k }
    for each edge e_j ∈ E:
        He^(l+1)[j] ← max { Ze[i] : edge(c_i) = e_j }

# Readout
Ψ ← fetch-and-concat using Hf^(T), He^(T), Hc^(T)
Z ← ReLU(Ψ W^(T) + b^(T))
for each face f_k ∈ F and channel u:
    H_out^f[k, u] ← max { Z[i, u] : face(c_i) = f_k }
return H_out^f ∈ ℝ^{|F|×|U|}
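The readout and loss at the end of this workflow can be sketched in NumPy; all tensors below are toy placeholders, and `np.maximum.at` implements the per-face max-pool as a scatter-style update.

```python
import numpy as np

rng = np.random.default_rng(1)
num_C, num_F, num_classes = 6, 2, 3
face_of = np.array([0, 0, 0, 1, 1, 1])    # parent face of each coedge

Z = rng.standard_normal((num_C, num_classes))   # per-coedge class scores
scores = np.full((num_F, num_classes), -np.inf)
np.maximum.at(scores, face_of, Z)               # max-pool scores onto faces

# Cross-entropy on per-face labels (toy ground truth).
labels = np.array([0, 2])
log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
loss = -log_probs[np.arange(num_F), labels].mean()
```

Because every face of a valid B-rep is bounded by at least one coedge, the `-inf` initialization is always overwritten and the resulting scores are finite.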

6. Implications and Application Context

BRepNet directly consumes B-rep data structures, preserving topological and parametric fidelity. This avoids lossy conversion to mesh or point-cloud representations and supports tasks such as per-face segmentation with higher accuracy compared to mesh- and point-based networks. BRepNet also introduces structural flexibility through customizable kernel walk templates, capturing multi-entity patterns in B-reps (Lambourne et al., 2021). The release of the Fusion 360 Gallery segmentation dataset (over 35,000 B-rep models annotated per face with modeling operations) serves as a benchmark and resource for further research on B-rep-sensitive neural architectures.

7. Dataset and Evaluation Outcomes

BRepNet demonstrates superior segmentation accuracy on the Fusion 360 Gallery dataset, outperforming mesh and point cloud-based networks in aligning predicted regions with underlying modeling operations. A plausible implication is that native B-rep message passing yields more semantically relevant predictions for CAD-centric tasks, enhancing downstream modeling, annotation, and analysis pipelines in solid geometry domains (Lambourne et al., 2021).
