Dec. 12 –
Researchers at Kyushu University have developed a new method to understand how deep neural networks interpret information and classify it into groups.
Deep neural networks are a type of artificial intelligence (AI) that mimics the way the human brain processes information, but understanding how these networks “think” has long been a challenge.
Published in IEEE Transactions on Neural Networks and Learning Systems, the new study addresses the pressing need to ensure that AI systems are accurate and robust enough to meet the standards required for safe use.
Deep neural networks process information in many layers, much as humans solve a puzzle step by step. The first layer, known as the input layer, takes in the raw data. Later layers, called hidden layers, analyze that information. The first hidden layers focus on basic features, such as detecting edges or textures, much like examining individual puzzle pieces. Deeper hidden layers combine these features to recognize more complex patterns, such as identifying a cat or a dog, similar to connecting puzzle pieces to reveal the big picture.
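The layered processing described above can be sketched as a toy feed-forward network; all layer sizes and the random weights here are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers.
    return np.maximum(0, x)

x = rng.normal(size=4)  # input layer: the raw data (4 features)
# Random weight matrices for two hidden layers and an output layer.
W1, W2, W3 = (rng.normal(size=s) for s in [(8, 4), (8, 8), (2, 8)])

h1 = relu(W1 @ x)   # first hidden layer: basic features (edges, textures)
h2 = relu(W2 @ h1)  # deeper hidden layer: combinations of those features
scores = W3 @ h2    # output layer: one score per class (e.g. cat vs. dog)
print(scores.shape)  # (2,)
```

The intermediate activations `h1` and `h2` are exactly the “hidden” quantities the article says are hard to interpret.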
“However, these hidden layers are like a closed black box: we see the input and the output, but it is not clear what happens in between,” says Danilo Vasconcellos Vargas, associate professor at the Faculty of Information Science and Electrical Engineering, Kyushu University, in a statement. “This lack of transparency becomes a serious problem when AI makes mistakes, sometimes triggered by something as small as changing a single pixel. AI can appear intelligent, but understanding how it arrives at its decisions is key to ensuring it is reliable.”
Current methods for visualizing how AI organizes information rely on simplifying high-dimensional data into 2D or 3D representations. These methods let researchers observe how the AI groups data points (for example, placing images of cats near other cats and apart from dogs). However, this simplification has critical limitations.
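To illustrate that kind of simplification, here is a minimal sketch using PCA (one common reduction; the article does not name a specific technique) on made-up 64-dimensional activations, showing how much variance is thrown away when only two dimensions are kept:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical high-dimensional activations for two classes.
cats = rng.normal(loc=0.0, size=(50, 64))
dogs = rng.normal(loc=1.0, size=(50, 64))
X = np.vstack([cats, dogs])

# PCA via SVD on centered data, keeping only the top 2 components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2d = Xc @ Vt[:2].T  # 64 dimensions collapsed to 2

# Fraction of the variance the 2D picture actually retains.
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(X2d.shape, round(explained, 2))
```

On data like this, most of the variance lives outside the two plotted dimensions, which is the information loss Vargas describes below.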
“When we simplify high-dimensional information into fewer dimensions, it’s like flattening a 3D object into 2D: we lose important details and can’t see the full picture. Additionally, this way of visualizing how data is grouped makes it difficult to compare different neural networks or classes of data,” explains Vargas.
In this study, the researchers developed a new method, called the k-star distribution method, that more clearly visualizes and evaluates how well deep neural networks classify related items.
The method works by assigning each input data point a “k-star value” indicating its distance to the nearest unrelated data point. A high k-star value means the data point is well separated (e.g., a cat far from any dog), while a low k-star value suggests possible overlap (e.g., a cat closer to a dog than to other cats). By looking at all data points within a class, such as cats, the approach produces a distribution of k-star values that gives a detailed picture of how the data is organized.
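Going only by the article’s description, a k-star-style value could be computed roughly as follows; the metric as published in the paper may be defined differently:

```python
import numpy as np

def k_star_values(X, y, target_class):
    """Distance from each point of target_class to its nearest point of a
    *different* class -- a simplified reading of the article's k-star value."""
    same = X[y == target_class]
    other = X[y != target_class]
    # Pairwise distances: each same-class point vs. every other-class point.
    d = np.linalg.norm(same[:, None, :] - other[None, :, :], axis=-1)
    return d.min(axis=1)  # one value per point in the class

rng = np.random.default_rng(2)
cats = rng.normal(0.0, 1.0, size=(30, 16))  # synthetic 16-D features
dogs = rng.normal(4.0, 1.0, size=(30, 16))
X = np.vstack([cats, dogs])
y = np.array([0] * 30 + [1] * 30)

kstar = k_star_values(X, y, target_class=0)  # distribution for "cats"
print(kstar.shape)
```

Collecting `kstar` over a whole class yields the distribution the article describes; note the computation happens in the full 16-dimensional space, with no projection to 2D.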
“Our method preserves the higher-dimensional space, so no information is lost. It is the first and only model that can provide an accurate view of the ‘local neighborhood’ around each data point,” Vargas emphasizes.
Using their method, the researchers found that deep neural networks classify data into clustered, fractured, or overlapping arrangements. In a clustered arrangement, similar items (e.g., cats) are grouped closely together while unrelated items (e.g., dogs) are clearly separated, meaning the AI can classify the data well. Fractured distributions, by contrast, indicate that similar items are scattered across a wide space, while overlapping distributions occur when unrelated items occupy the same space; both arrangements make classification errors more likely.
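The contrast between clustered and overlapping arrangements shows up directly in nearest-unrelated-point distances, as this sketch on synthetic data (illustrative only, not the study’s data) suggests:

```python
import numpy as np

def nearest_other_class_dist(A, B):
    # Distance from each point in A to its nearest neighbor in B.
    return np.linalg.norm(A[:, None] - B[None, :], axis=-1).min(axis=1)

rng = np.random.default_rng(3)
# Clustered arrangement: the two classes sit far apart.
clustered = nearest_other_class_dist(rng.normal(0, 1, (40, 8)),
                                     rng.normal(6, 1, (40, 8)))
# Overlapping arrangement: both classes occupy the same region.
overlap = nearest_other_class_dist(rng.normal(0, 1, (40, 8)),
                                   rng.normal(0, 1, (40, 8)))
print(clustered.mean() > overlap.mean())  # True
```

Well-separated classes give uniformly large values, while overlapping classes produce many small ones, which is why the shape of the distribution signals likely classification errors.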
LIKE IN A WAREHOUSE
Vargas compares this to a warehouse system: “In a well-organized warehouse, similar items are stored together, making retrieval easy and efficient. If items are intermingled, they become harder to find, which increases the risk of selecting the wrong item.”
AI is increasingly used in critical systems such as autonomous vehicles and medical diagnostics, where accuracy and reliability are essential. The k-star distribution method helps researchers, and even policymakers, evaluate how an AI organizes and classifies information, flagging potential weaknesses or errors. This not only supports the regulatory processes needed to safely integrate AI into daily life, but also provides valuable insight into how AI “thinks.” By identifying the root causes of errors, researchers can refine AI systems so that they are not only accurate but also robust, capable of handling noisy or incomplete data and adapting to unexpected conditions.
“Our ultimate goal is to create AI systems that maintain accuracy and reliability, even when faced with the challenges of real-world scenarios,” concludes Vargas.