Saturday, 31 October 2020

Sailing again!

We sold Nimrod in July 2018, with the plan to move towards chartering boats rather than owning one. We had a terrific plan for 2020: skiing in Iran in March, a road trip in Scandinavia followed by a two-week charter with friends out of the Norwegian Lofoten Islands in June, and another cruise following the wake of Alfred Russel Wallace in Indonesia in October.


Lofoten Islands

Political tension in Iran scotched the ski-trip, and Covid-19 put paid to the other two trips.

Bruce and Anne Stewart had been planning to go to Norway with us. They have bought into a syndicate that owns a Seawind 1160 (a sistership of Nimrod) called Antidote, and they very kindly invited us to cruise with them to bring the boat south, from Hamilton Island in the Whitsundays, part of the way back to its home port in Brisbane.

The meteorology of the Australian east coast goes like this. January to March carries the risk of cyclones, so it is best to stay south of the Tropic of Capricorn (Rockhampton). From April to September, south-east (SE) trade winds predominate: great for travelling north, hard for travelling south.


In October, the SE trade winds become less reliable, and some north-east winds kick in, roughly 50/50 with the SE winds.



The implication of this is that cruising yachts often head south in October and November to use the northerlies to get out of the range of cyclones, which start to arrive in November and December.

This is the theory. But in practice you can't rely on this pattern. One year we waited and waited for the northerlies and they never arrived.

Bruce sensibly avoided making any promises about getting us to a specific jumping-off point by a particular date. Might be Mackay, might be Yeppoon, if the northerlies cooperate.

And they did. Gentle 10-15 knot north east winds arrived on cue as we flew into Hamilton Island on October 21st, and carried us south through the southern Whitsunday Islands to Middle Percy and the Shoalwater Bay Training Area, a wonderful remote wilderness area which is occasionally closed for military exercises.


Thursday Oct 22nd. From Hamilton Island down to St Bees Island. 45 nautical miles. New buoys are helpful.


Anne




Friday Oct 23rd. An early start for the jump to Middle Percy Island. 67 nautical miles.


Bruce, lit by the sunrise.


Sunrise leaving St Bees


A spotted mackerel. It fed us for three days.


Whites Bay, Middle Percy Island.


Saturday Oct 24th, down to Island Head Creek. 48 nautical miles.


Antidote, in Island Head Creek



Drone's eye view


Off for a walk on the extensive beach.




Two nights in Island Head Creek, then a leisurely cruise down via Pearl Bay to Port Clinton. The coastline here is particularly beautiful.



Girls getting exercise





Some storm activity around


SV Margot, a German boat whose world-cruising owners had been affected by Covid travel restrictions. They lent it to their son Harald to cruise with his Peruvian fiancée Leydi.



Oysters on the rocks. 


Some are pretty big.


You have to be careful not to collect them from protected zones.

Watching the weather, we sailed on south to Keppel Bay marina at Rosslyn Bay, near Yeppoon. A storm hit after we were safely tied up.

George and I rented a car and drove back up to the Whitsundays, while Bruce and Anne sailed out to the Great Barrier Reef.

In Airlie Beach, we chartered another Seawind 1160, 'Sea Dragon', from Whitsunday Escape.


More perfect weather, and no particular place to go. Very relaxing.

We visited Stonehaven Bay, Blue Pearl Bay and Butterfly Bay.


Butterfly Bay


Butterfly Bay


Thousands of flying foxes heading for Whitsunday Island at sunset


Goanna 


Sea Eagle


Clouds from Mays Bay

Sunday, 11 October 2020

Brains and Neural Networks

There is an emerging synergy between neuroscientists studying the brain and computer scientists trying to create AI. The computer scientists use a technique called 'neural networks', loosely inspired by the way the brain is organised. In turn, ideas from AI are inspiring a new understanding of the brain.

I should like to explore these ideas and try to understand them.

Overview

1) Key places
2) Four key people
3) Common themes between brains and artificial neural networks.
4) Summary

Places


Alexandra House is a building in Queen Square, London. It houses the University College London (UCL) Institute of Cognitive Neuroscience and the Gatsby Computational Neuroscience Unit.

Several of the people at the forefront of world research into the brain and artificial intelligence work there, or have worked there in the past.


Alexandra House


Google DeepMind, 6 Pancras Square, London 


Tesla HQ, Palo Alto.


The Pioneer Building in San Francisco, housing the offices of OpenAI and Neuralink.

There are many other important places and programs.

Key People


1) Karl Friston



Professor Karl Friston is a psychiatrist, theoretical neuroscientist and expert in brain imaging. He is director of the Functional Imaging Laboratory (FIL), just up the street from Alexandra House. He and Geoffrey Hinton became close friends when Hinton worked there between 1994 and 2001.

He explains the free-energy principle here. (Brace yourself: this is hard. I had to watch it several times.)


Friston’s free energy principle says that all life, at every scale of organization—from single cells to the human brain, with its billions of neurons—is driven by the same universal imperative, which can be reduced to a mathematical function. To be alive, he says, is to act in ways that reduce the gulf between your expectations and your sensory inputs. Or, in Fristonian terms, it is to minimize free energy. 

Free energy is the difference between the states you expect to be in and the states your sensors tell you that you are in. Or, to put it another way, when you are minimizing free energy, you are minimizing surprise.
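
To make this concrete, here is a minimal sketch in Python (my own toy illustration, not Friston's mathematics; the scenario and numbers are invented). An agent holds a belief about the world, predicts its sensory input from that belief, and repeatedly nudges the belief to shrink the prediction error, i.e. to minimize surprise.

```python
# Toy illustration of minimizing surprise (prediction error).
# The agent believes the water temperature is `belief`; its sensor
# reports `reading`. Gradient descent on the squared error nudges
# the belief towards what the senses report, so "surprise" shrinks.

belief = 10.0        # the state the agent expects to be in
reading = 25.0       # the state its sensor says it is in
learning_rate = 0.2

for step in range(20):
    error = reading - belief          # prediction error
    surprise = 0.5 * error ** 2       # squared-error stand-in for surprise
    belief += learning_rate * error   # step downhill on the surprise
    if step % 5 == 0:
        print(f"step {step:2d}  belief = {belief:6.2f}  surprise = {surprise:8.2f}")
```

In the full theory the agent can also act on the world to make its sensations match its predictions ('active inference'); this sketch only updates beliefs.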

Friston is not easy to understand. Here is a lengthy 'low-brow' explanation by a couple of fitness coaches discussing Karl Friston's Free Energy Principle and the Predictive Brain. 

'The World of AI Has Changed Forever. Dr. Karl J. Friston is the most cited neuroscientist in the world, celebrated for his work in brain imaging and physics inspired brain theory. He also happens to be the Chief Scientist at VERSES AI, working on an entirely new kind of AI called Active Inference AI, based on the Free Energy Principle (FEP) — Karl’s theory that has just been proven by researchers in Japan to explain how the brain learns.'


In a panel hosted by the Financial Times at the World Economic Forum in Davos, Switzerland, two of the biggest names in artificial intelligence — Dr. Karl Friston of VERSES AI, and Dr. Yann LeCun of Meta — discussed their aligned, yet contrasting visions for the future of AI, with Friston proclaiming, “Deep Learning is Rubbish.”


2) Geoffrey Hinton


Originally a psychologist, he has been called one of the 'godfathers of AI'. He is currently Professor of Computer Science at the University of Toronto, and also works for Google on the Google Brain project. He was the founding director of the Gatsby Computational Neuroscience Unit at University College London. Wikipedia article.


A lecture by Geoffrey Hinton on 'What are neural networks'



Andrew Ng interviews Geoffrey Hinton. This 40 minute interview is technical and difficult, but gives a good impression of Hinton's thinking.


"Godfather of AI" Geoffrey Hinton: The 60 Minutes Interview. October 2023.



3) Demis Hassabis



Demis (44) is a British polymath with a Greek father and Chinese mother. He was a child chess prodigy. Here is a quick bio and Wikipedia entry. He became a successful computer game programmer, studied Computer Science at Cambridge, and then did a PhD in cognitive neuroscience at University College London (UCL), which involved working at the Gatsby Computational Neuroscience Unit. He founded DeepMind in 2010, which was bought by Google for £400m in 2014. DeepMind now employs 700 people, of whom 400 have PhDs.



In 2016, Google DeepMind developed a neural-network program, AlphaGo, to challenge Lee Sedol, one of the world's top players of the game Go. Unlike the IBM supercomputer Deep Blue, which beat the chess champion Garry Kasparov using brute computational force, AlphaGo was based on neural networks. It was trained on games played by human experts and then improved by playing millions of games against itself. (A later version, AlphaGo Zero, learned the game entirely from self-play.)

AlphaGo - The Movie | Full Documentary (1hr:30)


After this blog post was originally published, Demis Hassabis announced that DeepMind had found a solution to the 50-year-old problem of protein folding.


AlphaFold: The making of a scientific breakthrough







4) Andrej Karpathy


Andrej Karpathy (33) is the director of artificial intelligence and Autopilot Vision at Tesla. He specializes in deep learning and computer vision. Karpathy previously interned at Google’s DeepMind. He also studied with Geoffrey Hinton at the University of Toronto.


Andrej discussing neural nets and computer vision as part of Tesla's Autonomy Day. It contains a good explanation of neural networks. He explains 'back-propagation'. His talk ends at 2:50.


AI for Full Self-Driving

On the first ever episode of The Robot Brains podcast, Pieter Abbeel sits down with Andrej Karpathy, director of AI at Tesla. Andrej is a world-leading expert in machine learning and training neural nets. In this episode he talks about what it's like working with Elon Musk, training driverless cars with machine learning, and the time he had to sleep on a yoga mat at Tesla HQ.


There are, of course, many other key people at the forefront of Brain and AI research, including:


Yann LeCun, the chief AI scientist at Facebook, and Yoshua Bengio; the two of them shared the 2018 Turing Award with Geoffrey Hinton.


Andrew Ng, the former Director of the Stanford Artificial Intelligence Lab.


Here is an example of how far self-driving has progressed (25th April 2021).



Elon Musk interview on Full Self-Driving and AI on 1st December 2020.


Andrej Karpathy, Workshop on Autonomous Driving. 25th June 2021.


Andrej Karpathy, Tesla AI Day. 19th August 2021. 


Brains and Neural Networks



One of the beauties of Darwin's theory of evolution is that it depends on the repeated application of simple principles: random mutation and natural selection.




As far as I can understand the four thinkers discussed above, the emerging theory is that both the brain and artificial neural networks feature the following:

1) An active brain, variously described as 'predictive', Bayesian, or 'an inference engine'. (Thus the brain is not simply a passive recipient of sensory inputs.) Bayes' Rule is described here.

2) A mechanism of predicting future inputs into the system.

3) A mechanism for comparing the predicted inputs with the actual inputs.

4) A mechanism for sending an error signal back to the inference engine and modifying the prediction. (This is called 'back-propagation', or 'backprop'; see the sketch after this list.)

5) The generation of a new prediction with the intention of having less error between the prediction and the actual inputs. (This is also described as an attempt to reduce 'Free Energy'.)

6) Multiple layers of the system, addressing low-level data up to abstract theory, or 'Model-of-the-World'.
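
Steps 2) to 5) are exactly the training loop of an artificial neural network. Here is a minimal sketch in Python (a toy example of my own, not any of the systems discussed above): a tiny network predicts an output, compares it with the actual value, back-propagates the error, and adjusts its weights so that the next prediction is better.

```python
import numpy as np

# A tiny two-layer network learning XOR: predict (step 2), compare
# (step 3), back-propagate the error (step 4), update the weights so
# the next prediction has less error (step 5).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for epoch in range(5000):
    hidden = sigmoid(X @ W1)               # forward pass...
    prediction = sigmoid(hidden @ W2)      # ...produces a prediction
    error = y - prediction                 # compare with the actual values
    d_out = error * prediction * (1 - prediction)       # error signal at output
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)   # propagated back a layer
    W2 += 0.5 * hidden.T @ d_out           # adjust weights to reduce the error
    W1 += 0.5 * X.T @ d_hidden

print(prediction.round(2))   # should approach [[0], [1], [1], [0]]
```

Whether the brain implements anything like this exact algorithm is still an open question in neuroscience.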

Some of the neuroscience of this was worked out in the studies of David Hubel and Torsten Wiesel, described in 'Brain and Visual Perception: The Story of a 25-Year Collaboration'.

With AI, the concept is explained by Andrej Karpathy in the video above, starting here through to 1:58:00.


The rate of improvement of neural networks is impressive, possibly five times as fast as the legendary Moore's Law.



These ideas are starting to be discussed in psychiatry.


From drugs to deprivation: a Bayesian framework for understanding models of psychosis. (One of the authors, Chris Frith, is a colleague of Karl Friston at the Wellcome Centre for Human Neuroimaging in Queen Square.) They view perception as a “handshake” between top-down and bottom-up processing: top-down models predict what we’re going to see, bottom-up models perceive the real world, and then they meet in the middle and compare notes to calculate a prediction error.

In their model, bottom-up sensory processing involves glutamate via the AMPA receptor, and top-down sensory processing involves glutamate via the NMDA receptor. 

Dopamine codes for prediction error, and seems to represent the level of certainty, or the “confidence interval”, of a given prediction or perception. Serotonin, acetylcholine, and the others seem to modulate these systems.
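
One standard way to formalise that 'handshake' is a precision-weighted update: the top-down prediction and the bottom-up sensation are combined in proportion to how reliable (precise) each is taken to be. Here is a minimal sketch in Python (my own illustration, essentially a one-step Bayesian/Kalman update; the function name and numbers are invented):

```python
def handshake(prior_mean, prior_var, sensed, sensor_var):
    """Combine a top-down prediction with a bottom-up sensation,
    weighting each by its precision (1 / variance)."""
    gain = prior_var / (prior_var + sensor_var)   # how much to trust the senses
    prediction_error = sensed - prior_mean
    posterior_mean = prior_mean + gain * prediction_error
    posterior_var = (1 - gain) * prior_var
    return posterior_mean, posterior_var

# A confident prior barely moves; an uncertain prior follows the senses.
print(handshake(prior_mean=0.0, prior_var=0.1, sensed=1.0, sensor_var=1.0))
print(handshake(prior_mean=0.0, prior_var=10.0, sensed=1.0, sensor_var=1.0))
```

In this framing, the “confidence interval” that dopamine is suggested to encode corresponds to the precision terms: misjudge them and the handshake miscombines prediction and sensation, which is one proposed route to the distorted perceptions of psychosis.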



One key theme runs through each of several brain theories: Optimization.

What is optimized? Value (expected reward, expected utility), or its complement, Surprise (prediction error, expected cost).

This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.
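
In symbols (the standard formulation from this literature, not from the post itself): surprise is the negative log probability of the sensory data o, and free energy F is a computable upper bound on it, because the extra term is a non-negative divergence between the brain's approximate beliefs q(s) about hidden states and the true posterior:

```latex
\text{surprise} = -\ln p(o), \qquad
F = -\ln p(o) + \underbrace{D_{\mathrm{KL}}\big[\,q(s)\,\|\,p(s \mid o)\,\big]}_{\ge 0} \;\ge\; -\ln p(o)
```

Because the divergence term is never negative, driving F down necessarily drives surprise down; maximizing value and minimizing surprise are two faces of the same optimization.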

Summary


Through the mist, some ideas are emerging and converging that provide a way of understanding how the brain works, and also how artificial intelligence using artificial neural networks might work. As with Darwin's theory of evolution, it seems likely that the explanation will come from the repeated application of principles that can apply from simple organisms up to higher animals and humans.

These ideas are being applied in the development of autonomous cars and in many other areas likely to affect us in the near to medium term.