
A Breakthrough Method for Building Neural Networks to Enhance AI Transparency

This streamlined approach clarifies how neural networks generate their outputs, enhancing transparency.


A subtle shift in the functioning of artificial neurons within neural networks might make AI systems more transparent.

Artificial neurons, the essential building blocks of deep neural networks, have stayed largely unchanged for decades. While these networks power modern AI, they are often viewed as enigmatic black boxes.

Current artificial neurons, employed in large language models such as GPT-4, operate by processing numerous inputs, summing them up, and then transforming this sum into an output using an additional mathematical operation within the neuron. Neural networks comprise combinations of these neurons, and their collective behaviour can be challenging to interpret.

This new method of combining neurons, however, operates in a slightly different way. It reduces some of the complexity within existing neurons and shifts it outside the neuron itself. Internally, these new neurons simply aggregate their inputs and produce an output, without the need for the extra hidden operation. Networks built from such neurons are termed Kolmogorov-Arnold Networks (KANs), named after the Russian mathematicians who inspired their creation.
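For readers who want the underlying mathematics, the Kolmogorov-Arnold representation theorem, the result these networks are named after, states that any continuous function of several variables can be assembled from sums and functions of a single variable:

    f(x_1, \ldots, x_n) = \sum_{q=0}^{2n} \Phi_q\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)

Here every Φ_q and φ_{q,p} is a continuous univariate function. KANs mirror this structure: learnable one-variable functions sit on the connections, and the neurons themselves only add.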

This simplification, thoroughly researched by a team led by MIT experts, could enhance our understanding of why neural networks generate specific outputs, support decision verification, and even allow for the detection of bias. Early evidence also suggests that as KANs grow in size, their accuracy improves more rapidly than networks constructed from traditional neurons.

“It’s intriguing research,” notes Andrew Wilson, who studies machine learning fundamentals at New York University. “It’s inspiring to witness people rethinking and reshaping the core design of these [networks].”

The basic principles of KANs were actually proposed in the 1990s, with researchers continuing to build simple versions of such networks. However, the MIT-led team has advanced this concept further, demonstrating how to construct and train larger KANs, conducting empirical tests, and analysing some KANs to show how their problem-solving approach could be interpreted by humans. “We’ve breathed new life into this idea,” said team member Ziming Liu, a PhD student in Max Tegmark’s lab at MIT. “And, hopefully, with the improved interpretability… we [may] no longer [need to] view neural networks as black boxes.”

Although still in its early stages, the team’s work on KANs is attracting significant attention. GitHub pages have surfaced, demonstrating the use of KANs for a range of applications, from image recognition to addressing fluid dynamics challenges.


Uncovering the formula

The breakthrough came when Liu and colleagues at MIT, Caltech, and other institutions were attempting to decipher the inner workings of standard artificial neural networks.

Unveiling the inner workings

In today’s AI landscape, most systems, including those powering large language models and image recognition, rely on a structure called a multilayer perceptron (MLP). MLPs consist of artificial neurons arranged in tightly connected ‘layers’. Each neuron contains an ‘activation function’ – a mathematical operation that transforms multiple inputs into an output in a predefined way.

In an MLP, every artificial neuron receives inputs from all neurons in the previous layer, multiplying each input by a corresponding ‘weight’ – a number representing that input’s importance. These weighted inputs are summed up and fed into the neuron’s activation function, generating an output for the next layer. MLPs learn to differentiate between, say, cat and dog images by fine-tuning these input weights across all neurons. Importantly, the activation function remains constant during training.
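To make this mechanism concrete, the following minimal sketch implements one MLP layer in Python with NumPy. The layer sizes and the choice of ReLU as the fixed activation are illustrative assumptions, not details taken from the article:

    import numpy as np

    def relu(z):
        # The fixed activation function: it is chosen in advance
        # and never changes during training.
        return np.maximum(0.0, z)

    def mlp_layer(inputs, weights, biases):
        # inputs:  outputs of the previous layer, shape (n_in,)
        # weights: learnable matrix, shape (n_out, n_in); training
        #          adjusts these numbers, one per connection
        # biases:  learnable vector, shape (n_out,)
        weighted_sum = weights @ inputs + biases  # each neuron sums its weighted inputs
        return relu(weighted_sum)                 # fixed transformation produces the output

    # Illustrative usage: four inputs feeding a layer of three neurons.
    rng = np.random.default_rng(0)
    x = rng.normal(size=4)
    W = rng.normal(size=(3, 4))
    b = np.zeros(3)
    print(mlp_layer(x, W, b))

Everything the network learns lives in W and b; the activation itself is frozen, which is exactly the design choice KANs revisit.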

Once trained, the entire network of neurons and their connections essentially becomes a complex function. It takes an input (like thousands of image pixels) and produces a desired output (such as 0 for cat, 1 for dog). Understanding this function’s mathematical form is crucial for explaining why it produces certain outputs – for instance, why it deems someone creditworthy based on their financial data. However, MLPs are notoriously opaque, making it nearly impossible to reverse-engineer complex networks like those used in image recognition.

Even when Liu and his team attempted to reverse-engineer MLPs for simpler tasks using custom ‘synthetic’ data, they faced significant challenges.

Liu explains, “If we find it challenging to interpret these synthetic datasets from neural networks, addressing real-world datasets feels almost impossible. We found it incredibly difficult to understand these neural networks, prompting us to rethink the architecture.”

Charting the mathematics

The main innovation was replacing the fixed activation function with a simpler, learnable function that transforms each input before it enters the neuron.

Unlike the MLP neuron’s activation function, which processes multiple inputs, each simple function outside the KAN neuron handles just one number, transforming it into another. During training, instead of adjusting individual weights as in MLPs, the KAN learns how to represent each simple function.

In a paper published on the arXiv preprint server this year, Liu and colleagues demonstrated that these simpler functions outside the neurons are far easier to interpret, enabling the reconstruction of the mathematical form of the function being learned by the entire KAN.
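To show the contrast in the same NumPy style as the sketch above, here is a minimal illustration of a KAN-style layer. The original work places learnable splines on each connection; this sketch substitutes a small learnable mix of fixed basis functions to keep the code short, so the basis choice and sizes are assumptions for illustration only:

    import numpy as np

    def edge_function(x, coeffs):
        # A learnable univariate function on one connection: one number
        # in, one number out. The paper uses splines; this fixed basis
        # with learnable coefficients is a simplifying assumption.
        basis = np.array([x, x**2, np.sin(x), np.tanh(x)])
        return coeffs @ basis

    def kan_layer(inputs, coeffs):
        # inputs: shape (n_in,)
        # coeffs: learnable array, shape (n_out, n_in, n_basis);
        #         training shapes these functions rather than scalar weights
        n_out, n_in, _ = coeffs.shape
        outputs = np.zeros(n_out)
        for j in range(n_out):
            # The neuron only aggregates; all the shaping happens in the
            # per-edge functions outside it.
            outputs[j] = sum(edge_function(inputs[i], coeffs[j, i])
                             for i in range(n_in))
        return outputs

    # Illustrative usage: four inputs, three neurons, four basis terms per edge.
    rng = np.random.default_rng(0)
    x = rng.normal(size=4)
    C = rng.normal(size=(3, 4, 4)) * 0.1
    print(kan_layer(x, C))

Because each learned function maps one number to another, it can be plotted or matched against a symbolic formula edge by edge, which is what makes reconstructing the whole network’s mathematical form feasible.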

The team has only tested KANs’ interpretability on simple, synthetic datasets, not complex real-world problems like image recognition. “We’re gradually pushing the boundaries,” Liu notes. “Interpretability can be quite challenging.”

Liu’s group has also shown that KANs become more accurate with size faster than MLPs, both theoretically and empirically for science-related tasks (such as approximating physics functions). “It’s unclear if this will apply to standard machine learning tasks, but it seems promising for science-related ones,” Liu remarks.

Liu acknowledges that KANs come with a notable drawback: they demand more time and computing power to train compared to MLPs.

“The application efficiency of KANs on large-scale datasets and complex tasks is limited,” explains Di Zhang from Xi’an Jiaotong-Liverpool University in Suzhou, China. However, he suggests that more efficient algorithms and hardware accelerators could help address this issue.