Back to blogging after a long hiatus. The current pandemic has provided all of us ample downtime to reflect and regroup. One of the main tasks on the back burner for me has been an ever-growing reading list, which I have resolved to make a dent in. A key current topic of interest has been “Whither AI?” and what may change in the post-COVID world in terms of areas of focus. A good book for seeing where we may be headed is the one mentioned above, from MIT Press, by Brian Cantwell Smith. I provide a quick summary and share some interesting discussion points below.
The book introduces some key terminology:
- Registering – The ability to “identify”, reify, and create a mapping of things in one’s mental model to entities in the real world. You can only understand and reason with “things” that you register.
- Reckoning in the World – Once registered, the kinds of “reasoning” – i.e., representational manipulation of the symbols/signs in one’s mental map – that may help one navigate the real world.
- Judgement – The higher level of understanding and awareness of what has been registered, what its limitations and benefits are, what has been left out, etc. In a sense, all modeling of the real world involves some judgement. (The short sketch after this list tries to make these three terms concrete in software terms.)
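For readers coming from a software background, here is a minimal sketch of my own (not from the book, and deliberately simplistic) of how these terms might map onto everyday code: the class definition plays the role of registration, the function is reckoning over what was registered, and judgement is everything the schema quietly leaves out. The names and the threshold are made up for illustration.

```python
from dataclasses import dataclass

# Registration: we decide that, for our purposes, the world consists of
# "patients" with exactly these attributes. Everything else is left out.
@dataclass
class Patient:
    age: int
    temperature_c: float

# Reckoning: manipulation of the registered symbols according to rules.
# The system "knows" nothing beyond what the registration lets it see.
def flag_for_review(p: Patient) -> bool:
    return p.age > 65 or p.temperature_c > 38.0

# Judgement (absent here): knowing that the 38.0 cutoff is a convention,
# that temperature is a crude proxy, and that the schema omits everything
# that did not fit our model of a "patient".
print(flag_for_review(Patient(age=70, temperature_c=37.2)))  # True
```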
The book makes the case that classical AI (GOFAI – Good Old-Fashioned AI) failed because it did not account for the dirtiness of representations due to faulty registration. One assumed a world that was symbolically “clean” for reckoning, so any variation from that clean world led to failure. Secondly, the book suggests that the world of ML-driven AI is a bit more robust, as it “focuses” more on registering the analog world and its machinery does not “manipulate” symbols per se – so the belief is that it is less fragile. However, the author concludes that neither of these approaches will lead to an intelligent entity. Both approaches lead to systems that do not (really) KNOW what they are doing. I say “KNOW” in all senses of knowing something about the world around us. (The toy sketch below illustrates the kind of fragility that faulty registration causes in the classical setting.)
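A toy illustration of that fragility, again mine rather than the book’s: a GOFAI-style reckoner that assumes the world arrives as clean, pre-registered symbols breaks the moment the input deviates from that assumption.

```python
# A toy GOFAI-style reckoner: it assumes the world shows up as clean,
# pre-registered symbols. Any deviation from that assumption breaks it.
RULES = {
    ("bird", "can_fly"): True,
    ("penguin", "can_fly"): False,
}

def can_fly(entity: str) -> bool:
    # Works only for entities registered exactly as expected.
    return RULES[(entity, "can_fly")]

print(can_fly("bird"))      # True
print(can_fly("penguin"))   # False
# can_fly("sparrow") or can_fly("Bird") -> KeyError: the world was not
# registered cleanly enough for the reckoning machinery to cope.
```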
We are far from engineering such an intelligent entity. Towards this end, a few criteria for constructing such an entity are laid out, including self-awareness (including knowing what one knows and does not know), consciousness, judgement, emotion, ethics, commitment, and responsibility for one’s actions/thoughts. It is also suggested that some combination of classical and ML-based AI is required to move forward, but how to mechanize/model judgement, how to address context dependency, and how much of the world should be registered a priori are all open issues. All the “intelligence” we see today is “intelligence” exhibited behaviorally – actions interpreted by us humans working in concert with a contemporary AI system.
Anyway, reflecting on the key themes in the book, a few points are important to remember (reiterating some that I have addressed in an earlier blog entry):
- The ML/statistical version of AI also makes ontological commitments when it registers the world – be it a pixel, a sound wave, or a binary UTF-8 character – even though the constructs may come from the world of physics! When we decide what to instrument and measure, and how to interpret the results of an ML algorithm, we are doing the same thing as GOFAI – nothing different. We have just pushed the problem somewhere else. So I believe all the problems of GOFAI will occur here too, at a different scale! (The image-pipeline sketch after this list tries to make this concrete.)
- As far as basic primitives for intelligence go – in AI or cognitive science – we still do not have a sense of whether what we have is enough. The notion that intelligence can be activated by flipping a switch – something inert suddenly waking up – may not be the model to shoot for, however much we base it on our biological understanding of the mind/brain. Is human intelligence physically situated in the brain? We do not know. Given the limitations of our understanding of intelligence, maybe our focus should be restricted to engineering intelligence, without layering in anthropomorphic notions. That would keep things far more practical and objective for both scientists and engineers.
- Finally, from a business/tech perspective, it is much clearer that there is no free lunch! Much more work – of the kind we are doing now – needs to be done at scale to digitally transform the world. We still need to build data models, workflow models, and communication models, embed the adaptivity that we understand, etc., and do a better job of it. All systems are going to be quite fragile for a long time to come, until we design them better – where the task of designing registers the right things from the world for a system to work with. Registering will still be a human activity! We will live in a world of reckoning systems reflecting the judgements of their designers. (The workflow sketch after this list illustrates this.)
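On the first point above, here is a small, purely illustrative Python sketch (the labels, threshold, and preprocessing are made-up stand-ins, not any particular system) of where the ontological commitments hide in a typical ML pipeline: in what we choose to measure and in how we read the outputs.

```python
import numpy as np

# Commitments made before any "learning" happens (all illustrative choices):
LABELS = ["cat", "dog", "other"]   # commitment: the space of possible answers
THRESHOLD = 0.5                    # commitment: how we read the model's scores

def register_scene(raw: np.ndarray) -> np.ndarray:
    # Deciding what to instrument and measure: the "world" is reduced to a
    # grayscale intensity grid scaled to [0, 1]. Color, depth, sound, and
    # everything else go unregistered.
    return raw.astype(np.float32) / 255.0

def interpret(scores: np.ndarray) -> str:
    # Interpreting the output is itself a registration decision: a score
    # vector is collapsed into one of the pre-chosen labels.
    idx = int(np.argmax(scores))
    return LABELS[idx] if scores[idx] >= THRESHOLD else "other"

print(register_scene(np.array([[0, 128], [255, 64]])))
print(interpret(np.array([0.70, 0.20, 0.10])))   # "cat"
print(interpret(np.array([0.40, 0.35, 0.25])))   # "other": below our chosen cutoff
```

Swap the pixel grid for a sound wave or a UTF-8 byte stream and the same observation holds: the commitments moved, they did not disappear.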
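And on the last point, a sketch in plain software-engineering terms (a hypothetical order workflow, invented for illustration): the designer registers, at design time, the only states the system can ever reckon about; anything left out simply does not exist for the running system.

```python
from enum import Enum

# The designer registers, ahead of time, the only states an order can be in.
# The running system reckons over these states but cannot notice anything the
# designer left out (e.g. "partially refunded" or "lost in transit").
class OrderState(Enum):
    CREATED = "created"
    PAID = "paid"
    SHIPPED = "shipped"
    DELIVERED = "delivered"

ALLOWED_TRANSITIONS = {
    OrderState.CREATED: {OrderState.PAID},
    OrderState.PAID: {OrderState.SHIPPED},
    OrderState.SHIPPED: {OrderState.DELIVERED},
    OrderState.DELIVERED: set(),
}

def advance(current: OrderState, target: OrderState) -> OrderState:
    # Reckoning within the designer's registration; the judgement about which
    # states matter happened at design time, not at run time.
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Unmodelled transition: {current} -> {target}")
    return target

print(advance(OrderState.CREATED, OrderState.PAID))  # OrderState.PAID
```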