Thursday, January 01, 2026

Deep Learning output, from a black box?

Deep learning models, particularly large neural networks like those used in computer vision or natural language processing, are generally considered "black boxes". This means that while we can observe the inputs and outputs, the internal workings, such as exactly which neuron processes which specific piece of information, are not directly interpretable.

When can this be properly understood or analysed? Is the output from Deep Learning systems optimised or randomised? Can we understand the workings of Deep Learning systems better? Can any other digital system optimally replace this black box of neurons and networks?
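One partial answer to "can we understand these systems better" is the family of attribution methods: rather than inspecting individual neurons, we ask how sensitive the model's output is to each input. Below is a minimal, hypothetical sketch (a tiny two-layer network with random weights, not any model from this post) showing gradient-based saliency, one common such technique.

```python
import numpy as np

# Hypothetical tiny 2-layer network with fixed random weights,
# used only to illustrate gradient-based saliency.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
W2 = rng.normal(size=(3, 1))   # hidden -> output weights

def forward(x):
    h = np.tanh(x @ W1)        # hidden activations
    return h @ W2              # scalar output

def input_gradient(x):
    # Analytic gradient of the output w.r.t. the input:
    # d(out)/dx_i = sum_j W1[i, j] * (1 - tanh^2(pre_j)) * W2[j, 0]
    pre = x @ W1
    dtanh = 1.0 - np.tanh(pre) ** 2
    return W1 @ (dtanh * W2.ravel())

x = np.array([0.5, -1.0, 2.0, 0.1])
saliency = np.abs(input_gradient(x))
print(saliency)  # larger values = inputs the output reacts to most
```

This does not reveal *why* the network computes what it does, but it turns the black box into something we can probe: features with near-zero saliency are (locally) being ignored, which is already more insight than raw input/output pairs give.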
