BRepNet: Boundary Representation Neural Architecture
- BRepNet is a neural architecture that directly processes CAD boundary representations using topological message passing for enhanced segmentation accuracy.
- It leverages coedge-centric convolutions and customizable kernel walk templates to capture relationships among faces, edges, and coedges in solid models.
- By bypassing mesh conversion, BRepNet achieves high fidelity segmentation and analysis, offering significant improvements in CAD modeling applications.
Boundary representation neural architectures target the direct processing of solid models as encountered in Computer-Aided Design (CAD), eschewing the need for mesh or point cloud approximation. BRepNet embodies a topological message passing scheme adapted to boundary representation (B-rep) structures, enabling segmentation and analysis tasks through coedge-centric convolutions on native B-rep topologies. Its expressiveness is anchored in leveraging the full relational structure among faces, edges, and oriented coedges, affording enhanced fidelity for manifold geometric modeling (Lambourne et al., 2021).
1. Topological Entities and Data Structures
BRepNet operates on canonical B-rep entities:
- Faces ($F$): Surface patches.
- Edges ($E$): Curve segments bounding faces.
- Oriented coedges ($C$): Directed half-edges, each associated with a direction of traversal along a face loop.
Each coedge $c$ possesses fields:
- $\mathrm{next}(c)$: successor along the parent face's boundary loop.
- $\mathrm{mate}(c)$: oppositely oriented coedge on the same edge.
- $\mathrm{face}(c)$, $\mathrm{edge}(c)$: parent face and edge.
Input features are assigned via:
- $x_f \in \mathbb{R}^p$ for faces,
- $x_e \in \mathbb{R}^q$ for edges,
- $x_c \in \mathbb{R}^r$ for coedges.
These aggregate into feature matrices:
- $X_f \in \mathbb{R}^{|F| \times p}$, $X_e \in \mathbb{R}^{|E| \times q}$, $X_c \in \mathbb{R}^{|C| \times r}$.
Sparse binary matrices encode B-rep topology:
- $N$, $P$, $M$ for next, prev, mate permutations.
- $E_{\mathrm{inc}} \in \{0,1\}^{|C| \times |E|}$ and $F_{\mathrm{inc}} \in \{0,1\}^{|C| \times |F|}$ for incidence relations.
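As a concrete illustration, these permutation and incidence matrices can be assembled from per-coedge fields with NumPy. The toy solid below (two triangular faces glued along three shared edges) and its entity indexing are hypothetical, chosen only to make the structural identities visible:

```python
import numpy as np

# Toy B-rep: two triangular faces glued along three shared edges.
# Entity indexing here is illustrative, not from any real kernel.
num_C, num_E, num_F = 6, 3, 2

next_co = [1, 2, 0, 4, 5, 3]   # successor in the parent face loop
mate_co = [5, 4, 3, 2, 1, 0]   # oppositely oriented coedge on the same edge
face_of = [0, 0, 0, 1, 1, 1]   # parent face of each coedge
edge_of = [0, 1, 2, 2, 1, 0]   # parent edge of each coedge

def permutation_matrix(mapping, n):
    """Binary matrix whose row i selects entity mapping[i]."""
    P = np.zeros((n, n), dtype=int)
    P[np.arange(n), mapping] = 1
    return P

N = permutation_matrix(next_co, num_C)   # "next" permutation
M = permutation_matrix(mate_co, num_C)   # "mate" permutation
P = N.T                                  # "prev" is the inverse of "next"

F_inc = np.zeros((num_C, num_F), dtype=int)   # coedge-face incidence
F_inc[np.arange(num_C), face_of] = 1
E_inc = np.zeros((num_C, num_E), dtype=int)   # coedge-edge incidence
E_inc[np.arange(num_C), edge_of] = 1

# Structural identities of a well-formed manifold B-rep:
assert (M @ M == np.eye(num_C, dtype=int)).all()      # mate is an involution
assert (N @ N @ N == np.eye(num_C, dtype=int)).all()  # triangle loops close after 3 steps
```

Row `i` of any product of these matrices selects the entity reached from coedge `i` by the corresponding topological walk, which is exactly what the kernel construction in the next section exploits.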
2. Convolutional Kernel Design
BRepNet convolution centers on each coedge by constructing sets of topological walks using powers of $N$, $P$, $M$. Three template lists steer the kernel:
- $K_c$: Walks ending on coedges.
- $K_e$: Walks ending on edges.
- $K_f$: Walks ending on faces.
For layer $t$, hidden-state matrices $H_f^{(t)}$, $H_e^{(t)}$, $H_c^{(t)}$ are used. Feature gathering proceeds as:
- $\Psi_f = [\,K_{f,1} H_f^{(t)} \,\|\, \cdots \,\|\, K_{f,|K_f|} H_f^{(t)}\,]$ and analogously for edges and coedges,
- All concatenated: $\Psi = [\,\Psi_f \,\|\, \Psi_e \,\|\, \Psi_c\,]$.
A multilayer perceptron (MLP) applies:
- $Z = \mathrm{ReLU}(\Psi W^{(t)} + b^{(t)})$.
Splitting blockwise,
- $Z = [\,H_c^{(t+1)} \,\|\, Z_f \,\|\, Z_e\,]$,
- Max-pooling performed over coedges incident to each face/edge yields $H_f^{(t+1)}$ and $H_e^{(t+1)}$.
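To make the gathering step concrete, the following sketch applies a small coedge kernel and the linear-plus-ReLU step. The permutation matrices, random states, and the particular walk list are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
num_C, s = 6, 4

# Illustrative permutations standing in for "next" and "mate":
N = np.zeros((num_C, num_C))
N[np.arange(num_C), (np.arange(num_C) + 1) % num_C] = 1   # cyclic successor
M = np.eye(num_C)[::-1]                                   # an involution

Hc = rng.standard_normal((num_C, s))    # coedge hidden states H_c^(t)

# K_c: walks ending on coedges -- identity, next, mate, mate-of-next.
Kc = [np.eye(num_C), N, M, N @ M]

# Gather: each walk matrix permutes the rows of Hc; concatenate features.
Psi = np.concatenate([K @ Hc for K in Kc], axis=1)        # (|C|, |Kc|*s)

W = rng.standard_normal((len(Kc) * s, 3 * s))
b = np.zeros(3 * s)
Z = np.maximum(Psi @ W + b, 0.0)                          # ReLU(Psi W + b)

# Split blockwise into new coedge states plus face/edge contributions.
Hc_next, Zf, Ze = np.split(Z, 3, axis=1)
```

Because every walk is a permutation of coedge rows, the gather is a pure re-indexing: no geometry is touched, only topology.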
3. Message Passing Dynamics
BRepNet conforms to the message passing neural network framework. At each layer:
- For coedge $c_i$, neighbor states are gathered via the template walks.
- The per-coedge message is computed through a shared MLP over the concatenated neighbor states.
- Coedge state update: $H_c^{(t+1)}$ is read off directly as the coedge block of the MLP output $Z$.
- Per-face and per-edge states update via pooled aggregation over incident coedges:
- $H_f^{(t+1)}[k] = \phi\big(\max\{\,Z_f[i] : \mathrm{face}(c_i) = f_k\,\}\big)$ and analogously for edges,
where $\phi$ is typically the identity or a small linear transformation.
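The pooled aggregation can be sketched as a channel-wise max over coedge rows grouped by parent face (toy indexing assumed, $\phi$ taken as the identity):

```python
import numpy as np

num_C, num_F, s = 6, 2, 4
face_of = np.array([0, 0, 0, 1, 1, 1])   # parent face of each coedge (toy example)

rng = np.random.default_rng(1)
Zf = rng.standard_normal((num_C, s))     # per-coedge contributions to faces

# H_f[k] = max over { Zf[i] : face(c_i) = f_k }, channel-wise.
Hf = np.full((num_F, s), -np.inf)
for i, k in enumerate(face_of):
    Hf[k] = np.maximum(Hf[k], Zf[i])
```

Max-pooling makes the face update invariant to the order in which its coedges are visited, which matters because loop traversal order is a modeling-kernel artifact.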
4. Layer and Parameter Specification
A typical BRepNet instantiation consists of:
- Number of layers $T$: usually $2$ or $3$ convolution units.
- Hidden dimensionality $s$: uniform across faces, edges, and coedges.
- Each layer's MLP: two hidden layers (width $3s$), ReLU activations, output size $3s$.
- Absence of explicit residual connections; each layer computes fresh states.
- Final readout unit yielding class scores per face: an additional convolution with MLP output dimension $|U|$ (the number of classes), followed by max-pooling over incident coedges to produce $H^f_{\mathrm{out}} \in \mathbb{R}^{|F| \times |U|}$ (raw class scores).
- Training loss: cross-entropy on per-face labels.
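Under the assumption that every per-entity state has width $s$ after the first layer, the weight shapes of one layer's MLP can be tabulated with a small hypothetical helper:

```python
def mlp_shapes(n_kf, n_ke, n_kc, s, num_classes=None):
    """Weight shapes for one BRepNet layer MLP: input is the concatenated
    gathered features, two hidden layers of width 3s, output 3s (or
    num_classes for the readout unit). Sketch assuming all entity states
    share hidden size s; first-layer inputs of widths p, q, r differ."""
    d_in = (n_kf + n_ke + n_kc) * s
    d_out = 3 * s if num_classes is None else num_classes
    return [(d_in, 3 * s), (3 * s, 3 * s), (3 * s, d_out)]

# e.g. a layer with 2 face walks, 2 edge walks, 5 coedge walks, s = 64:
print(mlp_shapes(2, 2, 5, 64))   # [(576, 192), (192, 192), (192, 192)]
```

The $3s$-wide output exists precisely so that it can be split into the three equal blocks $H_c^{(t+1)}$, $Z_f$, $Z_e$.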
5. Forward Pass: Pseudocode Workflow
The BRepNet forward pass executes sequentially:
```
Input:
    Xf ∈ ℝ^{|F|×p}, Xe ∈ ℝ^{|E|×q}, Xc ∈ ℝ^{|C|×r}
    Topology: N, P, M, E_inc, F_inc
    Kernels: Kf = {Kf_i}, Ke = {Ke_j}, Kc = {Kc_k}
    Layers: T
    Hidden dimension: s
    MLP parameters: {Θ^(0), …, Θ^(T)}

Hf^(0) ← Xf;  He^(0) ← Xe;  Hc^(0) ← Xc

for l in 0 to T−1:
    # Fetch & concatenate
    Ψf ← [Kf_1 Hf^(l) ‖ … ‖ Kf_|Kf| Hf^(l)]
    Ψe ← [Ke_1 He^(l) ‖ … ‖ Ke_|Ke| He^(l)]
    Ψc ← [Kc_1 Hc^(l) ‖ … ‖ Kc_|Kc| Hc^(l)]
    Ψ  ← [Ψf ‖ Ψe ‖ Ψc]
    # Linear + ReLU
    Z ← ReLU(Ψ W^(l) + b^(l))
    Split Z into [Hc^(l+1), Zf, Ze]
    # Pool to faces and edges
    for each face f_k ∈ F:
        Hf^(l+1)[k] ← max { Zf[i] : face(c_i) = f_k }
    for each edge e_j ∈ E:
        He^(l+1)[j] ← max { Ze[i] : edge(c_i) = e_j }

# Readout
Ψ ← fetch-and-concat using Hf^(T), He^(T), Hc^(T)
Z ← ReLU(Ψ W^(T) + b^(T))
for each face f_k ∈ F and channel u:
    H_out^f[k,u] ← max { Z[i,u] : face(c_i) = f_k }
return H_out^f ∈ ℝ^{|F|×|U|}
```
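A minimal executable sketch of this workflow in NumPy follows. The kernel choice (own face, mate's face, own edge, and identity/next/mate coedge walks), the single linear-plus-ReLU step per layer, and the toy two-triangle solid are all illustrative assumptions, not the reference implementation:

```python
import numpy as np

def pool_max(Z, owner, n):
    """Max-pool coedge rows of Z onto the n owning faces (or edges)."""
    out = np.full((n, Z.shape[1]), -np.inf)
    for i, k in enumerate(owner):
        out[k] = np.maximum(out[k], Z[i])
    return out

def brepnet_forward(Xf, Xe, Xc, N, M, face_of, edge_of, weights):
    """Simplified forward pass mirroring the pseudocode above.
    Assumed kernels: faces {own, mate's}, edges {own},
    coedges {identity, next, mate}; one linear + ReLU per layer."""
    num_C = Xc.shape[0]
    num_F, num_E = Xf.shape[0], Xe.shape[0]
    F_inc = np.zeros((num_C, num_F)); F_inc[np.arange(num_C), face_of] = 1
    E_inc = np.zeros((num_C, num_E)); E_inc[np.arange(num_C), edge_of] = 1
    Hf, He, Hc = Xf, Xe, Xc
    for l, (W, b) in enumerate(weights):
        Psi = np.concatenate([F_inc @ Hf, M @ F_inc @ Hf,   # face walks
                              E_inc @ He,                   # edge walk
                              Hc, N @ Hc, M @ Hc], axis=1)  # coedge walks
        Z = np.maximum(Psi @ W + b, 0.0)                    # ReLU(Ψ W + b)
        if l < len(weights) - 1:
            Hc, Zf, Ze = np.split(Z, 3, axis=1)
            Hf = pool_max(Zf, face_of, num_F)
            He = pool_max(Ze, edge_of, num_E)
        else:
            return pool_max(Z, face_of, num_F)  # raw per-face class scores

# Toy run: two glued triangular faces, s = 4 hidden channels, 3 classes.
rng = np.random.default_rng(0)
s, U = 4, 3
next_co, mate_co = [1, 2, 0, 4, 5, 3], [5, 4, 3, 2, 1, 0]
face_of, edge_of = [0, 0, 0, 1, 1, 1], [0, 1, 2, 2, 1, 0]
N = np.zeros((6, 6)); N[np.arange(6), next_co] = 1
M = np.zeros((6, 6)); M[np.arange(6), mate_co] = 1
d_in = 6 * s   # six walk blocks of width s each (with p = q = r = s)
weights = [(0.1 * rng.standard_normal((d_in, 3 * s)), np.zeros(3 * s)),
           (0.1 * rng.standard_normal((d_in, U)), np.zeros(U))]
scores = brepnet_forward(rng.standard_normal((2, s)), rng.standard_normal((3, s)),
                         rng.standard_normal((6, s)), N, M, face_of, edge_of, weights)
```

The last layer plays the role of the readout unit: its MLP output width equals the number of classes, and max-pooling over incident coedges yields one score row per face.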
6. Implications and Application Context
BRepNet directly consumes B-rep data structures, preserving topological and parametric fidelity. This avoids lossy conversion to mesh or point-cloud representations and supports tasks such as per-face segmentation with higher accuracy compared to mesh- and point-based networks. BRepNet also introduces structural flexibility through customizable kernel walk templates, capturing multi-entity patterns in B-reps (Lambourne et al., 2021). The release of the Fusion 360 Gallery segmentation dataset—over 35,000 B-rep models annotated by modeling operations per face—serves as a benchmark and resource for further research on B-rep-sensitive neural architectures.
7. Dataset and Evaluation Outcomes
BRepNet demonstrates superior segmentation accuracy on the Fusion 360 Gallery dataset, outperforming mesh and point cloud-based networks in aligning predicted regions with underlying modeling operations. A plausible implication is that native B-rep message passing yields more semantically relevant predictions for CAD-centric tasks, enhancing downstream modeling, annotation, and analysis pipelines in solid geometry domains (Lambourne et al., 2021).