Differences between brain and CPU
What are the differences between a brain and a CPU?
- The brain is extremely parallel (each neuron processing many signals), while CPUs are currently limited to a few cores.
- The brain appears able to consciously do only a single thing at once (single process, single thread).
- CPUs can explicitly control their memory accesses, while the brain's memory organization and access patterns are unclear.
- The brain is a lot slower in terms of sequential operations: neurons fire at a maximum of roughly 250-1000 Hz, while current generation (2020) desktop CPUs run in the 3-5 GHz range.
- The brain does not have a clear instruction set.
- The brain consumes glucose for energy, while a CPU consumes electricity.
- The human brain is much larger (1273 cm3 on average for men, 1131 cm3 for women) than a CPU chip: an Intel Core i7-10710U measures 46 mm by 24 mm, and its height, while unspecified, is definitely under 10 mm, giving a volume of at most roughly 11 cm3.
- Heat dissipation is done through cerebral circulation in the brain and through a heatsink attached to a CPU.
- The brain is biodegradable, the CPU is not.
- Signals are transmitted between neurons chemically, using neurotransmitters, while CPUs transmit signals between transistors electrically.
- The organization of the brain evolves over time (within a single person), while a CPU chip remains the same for its whole life.
- We currently cannot transplant a brain from one person to another, but we can transfer a CPU from one computer to another (as long as the motherboard is compatible).
- The brain contains a large amount of memory, while the CPU has a small amount of memory and relies on larger memory stores (RAM, disks).
- It is possible to reverse engineer a CPU by trying different combinations of inputs and recording the outputs, since its logic is immutable (see the sketch after this list). Doing the same with a part of the brain may yield different results each time, as the brain is mutable.
- The brain may not have different levels of memory cache (we do, however, talk about short- and long-term memory).
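To illustrate the point about immutability, here is a minimal sketch in Python, assuming the circuit under study behaves like a small pure function; `black_box` and its three 1-bit inputs are hypothetical stand-ins, not a real CPU interface:

```python
from itertools import product

def black_box(a: int, b: int, c: int) -> int:
    """Stand-in for an immutable combinational circuit (hypothetical)."""
    return (a & b) ^ c

# Exhaustively try every input combination and record the output. This fully
# characterizes the circuit precisely because it is immutable: the same inputs
# always produce the same output. A mutable system such as the brain offers no
# such guarantee, so the recorded table could be stale the moment it is built.
truth_table = {
    (a, b, c): black_box(a, b, c)
    for a, b, c in product((0, 1), repeat=3)
}

for inputs, output in truth_table.items():
    print(inputs, "->", output)
```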
What do humans modeled as computers look like?
- Numerous processes all running in parallel in different regions of the body and the brain (heartbeat, breathing, sight, smell, taste).
- The brain runs multiple processes at once, each processing a different input modality (sight, taste, touch, hearing, smell).
- Those processes are buffered, and a dedicated process takes care of synchronizing the different input streams to create a coherent flow of information (see the sketch after this list).
- The spinal cord and nerves are network cables, transferring information from the limbs and other regions of the body to the main processing unit, the brain.
- The eyes are digital cameras looking out into the world, converting photons into bits of data.
- The ears are microphones that can listen to a limited range of frequencies (20-20000 Hz).
- The mouth acts as a speaker, emitting sound for others to perceive.
- Touch is complex, as it deals with textures, temperatures, moisture and pressure; however, it can likely be modeled as a surface of discrete elements, each measuring a few quantities such as the force currently applied to it, temperature, and moisture.
- Taste and smell are also complex: they rely on specialized receptors that perceive different fragrances based on the distribution of the particles they detect and can recognize.
- The arms, legs, hands, feet are actuators used to interact with the environment.
- The stomach and intestine are the power supply.
- Neurons throughout the body act as distributed memory and storage, as well as processing units.
- Blood is used as a mechanism to transfer energy between components. It also acts as the brain's cooling system.
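As a rough illustration of this process model, here is a minimal sketch, assuming each modality can be approximated by a thread writing timestamped samples into a shared buffer; the modality names and sampling periods are illustrative assumptions:

```python
import queue
import threading
import time

# A sensor writes timestamped samples into the shared buffer at its own rate.
def sensor(name: str, period: float, buffer: queue.Queue, stop: threading.Event) -> None:
    while not stop.is_set():
        buffer.put((time.monotonic(), name, f"{name}-sample"))
        time.sleep(period)

buffer: queue.Queue = queue.Queue()
stop = threading.Event()

# Hypothetical modalities and sampling periods (seconds).
for name, period in [("sight", 0.04), ("hearing", 0.02), ("touch", 0.1)]:
    threading.Thread(target=sensor, args=(name, period, buffer, stop), daemon=True).start()

# The synchronizer drains the shared buffer for a while, then orders the
# samples by timestamp to produce a single coherent stream.
samples = []
deadline = time.monotonic() + 0.3
while time.monotonic() < deadline:
    try:
        samples.append(buffer.get(timeout=0.05))
    except queue.Empty:
        pass
stop.set()

for timestamp, name, sample in sorted(samples):
    print(f"{timestamp:.3f} {name}: {sample}")
```

Ordering by timestamp is the simplest possible synchronization policy; a real system would also have to deal with clock skew and with the very different latencies of each modality.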
Answering your own questions
Given a continuously growing number of questions one asks oneself, what is the proper procedure to answer these questions?
There is no proper procedure per se. The most important thing is to get started. Write your questions down so that you have a list. When you have a new question, you can check whether you have asked a similar question in the past.
In their book Algorithms to Live By, Brian Christian and Tom Griffiths write (about scheduling):
In a thrashing state, you’re making essentially no progress, so even doing tasks in the wrong order is better than doing nothing at all. Instead of answering the most important emails first — which requires an assessment of the whole picture that may take longer than the work itself — maybe you should sidestep that quadratic-time quicksand by just answering the emails in random order, or in whatever order they happen to appear on-screen.
A lot of what I wrote in How do you prioritize things when there are so many of them competing against one another? and How can I organize all the webpages I never read? applies here, namely:
- Record your questions in a single location that you can search (see the sketch after this list)
- Do not spend time answering questions whose answer you don't care about
- Record under such questions that you decided not to answer them due to a lack of value
- Prioritize the questions you would like to have an answer to
- Evaluate how valuable a question's answer is to you, and how much time you would be willing to spend to answer it
- Look online for existing answers
- Write down the answer to the question you asked yourself; you may ask the same question again in the future
- Answer one question per day
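As a sketch of what such a question list could look like in practice, here is a minimal Python model; the fields and the value-per-hour prioritization rule are assumptions for illustration, not something the advice above prescribes:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Question:
    text: str
    value: int                 # hypothetical 1-10 rating of how valuable an answer would be
    est_hours: float           # time you are willing to spend answering it
    asked_on: date = field(default_factory=date.today)
    answer: str | None = None  # also record "not worth answering" here

log: list[Question] = []

def add(text: str, value: int, est_hours: float) -> None:
    # Check whether a similar question has already been asked before adding it.
    if any(q.text.lower() == text.lower() for q in log):
        return
    log.append(Question(text, value, est_hours))

def next_question() -> Question:
    # Prioritize unanswered questions by the value of their answer per hour of effort.
    open_questions = [q for q in log if q.answer is None]
    return max(open_questions, key=lambda q: q.value / q.est_hours)

add("What are the differences between a brain and a CPU?", value=6, est_hours=2.0)
add("How can I organize all the webpages I never read?", value=4, est_hours=0.5)
print(next_question().text)  # the webpage question: 8 value/hour vs 3
```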
I often joke around with colleagues at work that it's easier to generate dumb questions than it is to answer them appropriately. As such, don't spend your time on questions that are not worth answering.
If you dedicate a bit of your time every day to answering one question you asked yourself, you will slowly accumulate a large list of questions you've spent time thinking about and answering. Those answers may be useful to others, so make sure to share as many of them as you can.
Storing and updating large amounts of data
How can an agent efficiently store terabytes of data, with hundreds of gigabytes updated daily?
This question comes from the idea that if we want to implement an artificial intelligence, it will have to be able to process a large amount of data daily, similar to how we need to actively process a stream of sensory inputs (sight, hearing, taste, touch, smell) for more than 12 hours per day.
In human beings, even though we perceive a large amount of incoming data, a lot of it is compressed through differencing, that is, comparing the previous input with the new input and storing only the difference. This is similar to how video is currently encoded and compressed. Accomplishing this feat, however, requires two things: a temporary buffer to store the previous input (or sequence of inputs), and a mechanism to compute the difference between the previous and the current input.
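A minimal sketch of this differencing scheme, assuming inputs arrive as fixed-size numeric frames:

```python
import numpy as np

buffer = None  # temporary buffer holding the previous frame
stream = []    # compressed stream: a first full frame, then only differences

def ingest(frame: np.ndarray) -> None:
    global buffer
    if buffer is None:
        stream.append(("key", frame.copy()))      # first frame is stored whole
    else:
        stream.append(("delta", frame - buffer))  # later frames store the difference
    buffer = frame.copy()

def reconstruct() -> list[np.ndarray]:
    frames: list[np.ndarray] = []
    current = None
    for kind, payload in stream:
        current = payload.copy() if kind == "key" else current + payload
        frames.append(current)
    return frames

ingest(np.zeros((4, 4), dtype=np.int16))
ingest(np.ones((4, 4), dtype=np.int16))  # a mostly-unchanged frame yields a sparse delta
assert all(np.array_equal(a, b)
           for a, b in zip(reconstruct(), [np.zeros((4, 4)), np.ones((4, 4))]))
```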
That differencing mechanism can be highly complex depending on the degree of compression desired. For example, if you shift all the pixels in an image by 1 on the x-axis, your differencing mechanism may simply tell you that all the pixels have changed, return a delta between their previous and new values, and be done. In some cases you may be lucky and a large number of pixels will have kept the same value. A much better differencing mechanism, however, would realize that everything has moved by one pixel on the x-axis and instead report that it detected an x += 1 transform, which compresses the change far more than a simple pixel-by-pixel difference. One benefit the brain has is that it can correlate multiple input channels to make sense of what is happening and better compress the information. In the previous case, the eyes may perceive that all the signals are now different at each receptor, but the brain also receives information from the inner ear telling it that the head moved by a certain amount, which most likely explains the transform that was applied to the eyes' input.
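Here is a sketch of that smarter differencer, assuming frames are 2D arrays and that candidate shifts can be found by brute-force search; real motion estimation is considerably more involved:

```python
import numpy as np

def detect_x_shift(prev: np.ndarray, curr: np.ndarray, max_shift: int = 8):
    # Try every candidate shift; return the first one that explains the new frame.
    # np.roll wraps around the edges, which is good enough for this sketch.
    for dx in range(-max_shift, max_shift + 1):
        if np.array_equal(np.roll(prev, dx, axis=1), curr):
            return dx
    return None

prev = np.arange(128).reshape(8, 16)
curr = np.roll(prev, 1, axis=1)  # every pixel moved by one along the x-axis

dx = detect_x_shift(prev, curr)
if dx is not None:
    encoded = ("transform", f"x += {dx}")  # a few bytes instead of a dense delta
else:
    encoded = ("delta", curr - prev)
print(encoded)  # ('transform', 'x += 1')
```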
In the brain, we make use of the fact that the sensory inputs come in different modes; each is compressed somewhat independently from the others. As such, we would expect information that is similar in format to be compressed together (text with text, video with video, audio with audio, etc.), as this is likely to lead to the highest compression. Furthermore, making use of the structure within the data will lead to better compression than blindly compressing a collection of inputs as an opaque blob.
I would expect such a compression system to make use of two types of compression: offline and online. Offline compression would occur during periods of low activity and would be able to offer higher levels of compression at the cost of less responsiveness during recompression. Online compression would occur when the system is actively being used and would rely mostly on fast encoding techniques to keep responsiveness high.
Online compression would rely on a lookup dictionary with a most-recently-used retention policy to compress blocks of data that have already been seen numerous times. The quality of the online compression depends heavily on the assumption that what will be observed in the future is highly likely to resemble what has been observed in the past. During the day, we spend most of our time in the same environment; as such, we experience and observe the same things for extended periods. Determining what is similar and what is different is what will lead to the highest amount of compression.
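A minimal sketch of such an online compressor, assuming fixed-size blocks and a bounded dictionary; the block size and capacity are illustrative:

```python
from collections import OrderedDict

class OnlineCompressor:
    """Sketch of an online block compressor with most-recently-used retention."""

    def __init__(self, block_size: int = 8, capacity: int = 1024):
        self.block_size = block_size
        self.capacity = capacity
        self.table: OrderedDict[bytes, int] = OrderedDict()  # block -> id
        self.next_id = 0

    def compress(self, data: bytes) -> list:
        out = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            if block in self.table:
                self.table.move_to_end(block)            # recently used, so keep it
                out.append(("ref", self.table[block]))   # cheap back-reference
            else:
                if len(self.table) >= self.capacity:
                    self.table.popitem(last=False)       # evict least recently used
                self.table[block] = self.next_id
                self.next_id += 1
                out.append(("raw", block))
        return out

# Repeated content (a day that looks like previous days) compresses to references.
c = OnlineCompressor(block_size=4)
print(c.compress(b"samesamesame"))  # [('raw', b'same'), ('ref', 0), ('ref', 0)]
```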
Offline compression would rely on making the most efficient use of the compute and memory available, as this process would be time-constrained. The online and offline systems might share information, with the online compressor letting the offline compressor know which regions of data are ripe for recompression. If the two systems do not communicate, the offline system would likely benefit from knowing which regions have already been compressed to the fullest, so that it spends most of its time processing data that was recently added. When it is done with this step, it can then attempt to increase the compression efficiency of all the data stored. Here again it should be able to make use of the differencing approach, given that days will likely be highly similar to one another. As such, we would expect the amount of space necessary to store a day to decrease drastically as more and more days of data are observed, possibly to the point where new days can be expressed entirely as segments of previous days.
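As a sketch of how the offline pass could spend its limited budget, under the simplifying assumption that each region costs one unit of compute to recompress:

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    fully_compressed: bool = False

def offline_pass(regions: list[Region], budget: int) -> list[str]:
    # Spend the limited low-activity budget on newly added data first (it has
    # never been deeply compressed), then recompress older data further.
    fresh = [r for r in regions if not r.fully_compressed]
    stale = [r for r in regions if r.fully_compressed]
    processed = []
    for region in fresh + stale:
        if budget == 0:
            break
        region.fully_compressed = True  # deep (slow) compression happens here
        budget -= 1
        processed.append(region.name)
    return processed

store = [Region("day-1", fully_compressed=True), Region("day-2"), Region("day-3")]
print(offline_pass(store, budget=2))  # ['day-2', 'day-3']
```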
Are passive or active agents more intelligent?
A passive agent is an agent that does its own processing but does not interact with the environment.
An active agent is an agent that actively interacts with the environment.
Given those two definitions, we expect the active agent to appear more intelligent because it behaves according to its environment and interacts with it. A passive agent may, however, also behave according to its environment; it just doesn't try to alter it.
Is an agent that never says anything necessarily dumb? Such an agent could be holding all the information of the world within itself and could potentially solve any problem thrown at it, but it simply does not offer answers because it does not interact with the world. The relationship between the agent and the world is one-sided, from the environment (on)to the agent. From the outside, the agent looks like an inanimate object that doesn't know anything and cannot do anything. But if you are able to peek inside, you can observe the most complex processes occurring. I would suggest that such an agent is highly intelligent.
In the stock market, we say that an investor is active if they regularly manage their portfolio, while a passive investor is one who manages their portfolio less frequently, invests in indexes instead of individual stocks, and prefers to rely on market trends to make a profit. Active management is often compared against passive management as a benchmark; that is to say, you should not get involved in active management if your strategy cannot beat a simpler passive strategy. It is often the case that active investors are seen as foolish and more likely to lose money than passive investors.