I agree strongly with this take but find it hard to convince others of it. Instead, people keep thinking there is a magic bullet waiting to be discovered, which results in a lot of wasted resources and money.
Autoencoders should output these kinds of splats instead of raw pixels, and would likely obtain better representations of the world at the bottleneck. Those bottleneck features could then be used for downstream tasks.
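Here is a rough sketch of what I mean, assuming PyTorch. Everything here is my own illustrative naming (`SplatAutoencoder`, `render_splats`, `n_splats`), and I use a toy additive 2D Gaussian compositor so the example is self-contained; a real version would swap in a proper differentiable rasterizer (e.g., gsplat).

```python
# Sketch: an autoencoder whose decoder emits 2D Gaussian-splat parameters
# instead of a pixel grid, rendered differentiably so the usual
# reconstruction loss still applies end to end.
import torch
import torch.nn as nn

def render_splats(params, h, w):
    """Additively composite isotropic 2D Gaussians into an image.
    params: (B, N, 6) = [mu_x, mu_y, log_scale, r, g, b] per splat."""
    B, N, _ = params.shape
    mu = params[..., :2].sigmoid()             # splat centers in [0, 1]^2
    scale = params[..., 2:3].exp().clamp(1e-3, 0.5)
    color = params[..., 3:6].sigmoid()         # per-splat RGB
    ys = torch.linspace(0, 1, h, device=params.device)
    xs = torch.linspace(0, 1, w, device=params.device)
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), -1)  # (H, W, 2)
    # Squared distance from every pixel to every splat center.
    d2 = ((grid[None, None] - mu[:, :, None, None]) ** 2).sum(-1)  # (B, N, H, W)
    weight = torch.exp(-0.5 * d2 / scale[..., None] ** 2)          # Gaussian kernel
    img = (weight[..., None] * color[:, :, None, None]).sum(1)     # (B, H, W, 3)
    return img.clamp(0, 1).permute(0, 3, 1, 2)                     # (B, 3, H, W)

class SplatAutoencoder(nn.Module):
    def __init__(self, latent_dim=32, n_splats=64, img_size=32):
        super().__init__()
        self.img_size = img_size
        self.n_splats = n_splats
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * img_size * img_size, 256),
            nn.ReLU(), nn.Linear(256, latent_dim),
        )
        # The decoder predicts splat parameters, not pixels.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_splats * 6),
        )

    def forward(self, x):
        z = self.encoder(x)                                  # bottleneck features
        params = self.decoder(z).view(-1, self.n_splats, 6)
        return render_splats(params, self.img_size, self.img_size), z

model = SplatAutoencoder()
x = torch.rand(4, 3, 32, 32)
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)  # pixel loss, but through splat space
loss.backward()
```

The point is that the gradient flows through the splat parameterization, so the bottleneck `z` is forced to describe the scene in terms of object-like primitives rather than a pixel grid.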
I am interested in doing research like this. Is there any way I can be a part of it or a similar group? I have been fighting for funding from the DoD for many years but to no avail, so I largely have to do this research on my own time, or solve my current grant's problems so that I can work on this. In my mind, this kind of research is the most interesting and important in the deep learning field right now. I am a hard worker and a high-throughput thinker... how can I get connected with others of a similar mindset?
I am glad they evaluated this hypothesis using weight decay, which is commonly thought to induce structured representations. My first thought was that the entire paper would be useless if they didn't run this experiment.
I find it rather interesting that the structured representations go from sparse to full to sparse as a function of layer depth. I have noticed that applying the weight decay penalty as an exponential function of layer depth gives better results than a single global weight decay.
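A minimal sketch of what I mean, assuming PyTorch; the base decay and growth rate (`base_wd`, `gamma`) are illustrative, not tuned values, and whether decay should grow or shrink with depth is itself a choice to experiment with.

```python
# Depth-scaled weight decay via per-layer optimizer parameter groups.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

base_wd, gamma = 1e-4, 2.0  # decay grows exponentially with layer depth
param_groups = []
depth = 0
for module in model:
    if isinstance(module, nn.Linear):
        param_groups.append({
            "params": module.parameters(),
            "weight_decay": base_wd * gamma ** depth,  # wd = base * gamma^depth
        })
        depth += 1

optimizer = torch.optim.AdamW(param_groups, lr=1e-3)
```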
Not quite. For an underlying semantic concept (e.g., a smiling face), you can go from a basis vector [0,1,0,...,0] to the original latent space via a single rotation. You could then induce said concept by taking the original latent point and traversing along that linear direction.
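A toy numpy illustration of the point (the rotation here is random just to make the mechanics concrete; in practice you would learn or estimate it):

```python
# A concept axis is a rotated standard-basis vector; editing a latent
# means walking along that direction.
import numpy as np

rng = np.random.default_rng(0)
d = 8
rot, _ = np.linalg.qr(rng.normal(size=(d, d)))  # random orthonormal matrix

e_k = np.zeros(d)
e_k[1] = 1.0            # basis vector for the concept, e.g. "smiling"
direction = rot @ e_k   # the same concept axis in the original latent space

z = rng.normal(size=d)  # some latent point
for alpha in (0.5, 1.0, 2.0):
    z_edit = z + alpha * direction  # traverse the linear concept direction
    print(alpha, np.round(z_edit, 2))
```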
I think we are saying the same thing. Please correct me where I am wrong, though. You could still look at the maps in the same way, but instead of the basis being one-hot dimensions (the standard basis), it could be rotated.
Why do you think something that is getting extensive investment from private sources needs philanthropy?
Also, billg has laid out the goals of his Foundation and what they aspire to achieve. Which one of those aspirations do you think should be replaced with "fundamental AI research"?
A lot of the Foundation money goes to disease research and the development of preventative and curative vaccines and medicines. All of those areas are already being transformed by AI as a tool, and a lot of that development happens as a result of philanthropic, government, and private investment.
AI as defined today is brittle. I am interested in understanding the fundamentals of learning so that systems can learn as quickly as humans and match their robustness. Such a breakthrough would have a worldwide impact.
As far as I can tell, very few places are looking into this topic area. Jeff Hawkins' group is one of the few. I would like to get involved, as this area is my passion, but I haven't been able to connect with funding to do so.
Gates is betting very heavily on AI and thinks it will greatly improve health outcomes. Both for medical research and even primary care. You may not like it, but I'm sure he is offering grants for AI research. Not necessarily for training models, but for finding effective ways to apply models to achieve the foundation's goals. So, it's not a stupid question to ask at all.