Thursday, January 01, 2026

Deep Learning output, from a black box?

Deep learning models, particularly large neural networks like those used in computer vision or natural language processing, are generally considered "black boxes." This means that while we can observe the inputs and outputs, the internal workings (such as exactly which neuron processes which specific piece of information) are not transparently understandable.

When can this be properly understood or analysed? Is the output from deep learning systems optimised or randomised? Can we understand the working of deep learning systems better? Can any other digital system optimally replace this black box of neurons and networks?
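On the first two questions, a small sketch can make the point concrete. The toy two-layer network below (hypothetical weights, NumPy only; not any specific production model) shows that a trained network's output is deterministic given fixed weights, i.e. optimised rather than randomised, and that simple techniques like input-gradient "saliency" scores let us peek inside the black box without interpreting individual neurons:

```python
import numpy as np

# A tiny 2-layer network with fixed weights: an illustrative stand-in
# for a trained deep learning model (weights are arbitrary, for demo only).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input layer  -> hidden layer
W2 = rng.normal(size=(8, 1))   # hidden layer -> output

def forward(x):
    """Forward pass: tanh hidden layer, linear output."""
    h = np.tanh(x @ W1)
    return (h @ W2).item()

x = np.array([0.5, -1.0, 2.0, 0.1])

# 1) Determinism: with fixed weights, the same input always yields the
#    same output. The mapping is learned via optimisation, not random.
assert forward(x) == forward(x)

# 2) A basic interpretability probe: numerical input gradients
#    (a simple "saliency" score) measure how sensitive the output is
#    to each input feature.
eps = 1e-6
saliency = np.array([
    (forward(x + eps * np.eye(4)[i]) - forward(x - eps * np.eye(4)[i])) / (2 * eps)
    for i in range(4)
])
print("output:", forward(x))
print("input sensitivity per feature:", saliency)
```

This is only a sketch: real interpretability work on large models uses richer tools (attention analysis, feature attribution, probing classifiers), but the underlying idea of measuring how outputs respond to inputs is the same.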

