The space of parameter vectors for feedforward ReLU neural networks with a fixed architecture is a high-dimensional Euclidean space used to represent the associated class of functions. However, there are well-known global symmetries, as well as poorly understood hidden symmetries, under which different parameter settings compute the same neural network function. As a result, the true dimension of the space of functions is less than the number of parameters. In this thesis, we are interested in the structure of hidden symmetries for neural networks at various parameter settings, and in particular for neural networks with architecture \((1,n,1)\). For this class of architectures, we fully categorize the insufficiency of local functional dimension coming from activation patterns, and we give a complete list of combinatorial criteria guaranteeing that a parameter setting admits no hidden symmetries coming from the slopes of the piecewise linear functions in the parameter space. Furthermore, we compute the probability that these hidden symmetries arise, which is rather small compared to the gap between functional dimension and the number of parameters; this suggests the existence of other hidden symmetries, and we investigate two mechanisms that help explain this phenomenon. Moreover, we motivate and define the notions of \(\varepsilon\)-effective activation regions and \(\varepsilon\)-effective functional dimension. We also experimentally estimate the difference between \(\varepsilon\)-effective functional dimension and true functional dimension for various parameter settings and values of \(\varepsilon\).
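To make these quantities concrete, the sketch below illustrates, under stated assumptions, how local functional dimension might be estimated numerically for an architecture-\((1,n,1)\) network: the Jacobian of the network output with respect to its \(3n+1\) parameters is evaluated on a batch of sample inputs, and its rank serves as the dimension estimate, while counting singular values above a threshold \(\varepsilon\) gives a simple stand-in for an \(\varepsilon\)-effective dimension. The helper names and the SVD-threshold definition are illustrative assumptions, not the constructions used in the thesis.

```python
import numpy as np

def param_jacobian(w, b, v, c, xs):
    """Jacobian of f(x) = v . relu(w*x + b) + c with respect to all
    3n+1 parameters, evaluated at each sample input in xs.
    Rows index inputs; columns index parameters (w_1..w_n, b_1..b_n, v_1..v_n, c)."""
    pre = np.outer(xs, w) + b            # (m, n) pre-activations
    act = np.maximum(pre, 0.0)           # (m, n) ReLU outputs
    ind = (pre > 0).astype(float)        # (m, n) activation pattern
    d_w = ind * v * xs[:, None]          # df/dw_j = v_j * x * 1[unit j active]
    d_b = ind * v                        # df/db_j = v_j * 1[unit j active]
    d_v = act                            # df/dv_j = relu(w_j x + b_j)
    d_c = np.ones((len(xs), 1))          # df/dc = 1
    return np.hstack([d_w, d_b, d_v, d_c])

def eps_effective_dimension(J, eps):
    """Count singular values of J above eps: an illustrative stand-in
    for an eps-effective functional dimension."""
    s = np.linalg.svd(J, compute_uv=False)
    return int(np.sum(s > eps))

# Example: architecture (1, n, 1) with n = 5 hidden units (hypothetical values)
rng = np.random.default_rng(0)
n = 5
w, b, v, c = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n), rng.normal()
xs = rng.uniform(-3.0, 3.0, size=200)    # many more samples than parameters
J = param_jacobian(w, b, v, c, xs)
print("number of parameters:", 3 * n + 1)
print("estimated local functional dimension:", np.linalg.matrix_rank(J))
print("eps-effective dimension (eps = 1e-2):", eps_effective_dimension(J, 1e-2))
```

For generic parameter settings the estimated rank falls below the parameter count \(3n+1\), reflecting the well-known scaling symmetries of ReLU units, and raising \(\varepsilon\) further lowers the effective count.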