
Content

[This is a transcript with references.]

Welcome everyone to this week’s science news. Today we’ll talk about computers made of human brain cells, galaxies that are too big to exist, how the Brits prevented a global chocolate disaster, what the Milky Way’s black hole is having for dinner, how to get radioactive compounds out of water, an impossibly efficient light sensor, better lithium-air batteries, Google’s second milestone on the way to quantum computing, and of course, the telephone will ring.

An international team of scientists, centred at Johns Hopkins University, has published a plan to create biocomputers powered by human brain cells.

They want to grow networks of simplified human brains, called organoids, to make computer processors, an idea they call “intelligence-in-a-dish”. That sounds to me like a 5000 dollar appetizer on a West Hollywood menu, which might be why they alternatively propose to call it “organoid intelligence”, OI for short. They say a computer made from human brain cells would be faster, more capable, and use less energy than even the best silicon-based machines running artificial intelligence.

And they might have a point. Today’s supercomputers are without doubt powerful, but compared to the human brain, they are energetically incredibly wasteful. An adult human brain runs on about 20 watts of power. The average supercomputing cluster requires about a million times as much, 20 megawatts or more.

Another advantage of the human brain is that it learns to solve problems from very little training data. One study in 2018 showed that a human could learn to tell similar images from different ones after about 10 training samples; an artificial intelligence required more than 10 million.

Two breakthroughs have led to these hopes for organoid intelligence. First, researchers are now able to reprogram human somatic cells back into stem cells, and then coax those stem cells into growing the small clumps of brain cells that make up the organoids.

They’ve already made brain organoids of about 100 thousand cells. And last year, those mini-brains learned to play Pong. That’s the computer game Pong, not beer pong. Now, researchers want to scale that up to 10 million neural cells.

To make it work, they’ll need a lot of innovation. They need a system to keep the cells alive, arrays of microelectrodes for the organoids to communicate with each other and with silicon computers for readout, and ways to store and process the information. The researchers say these computers might one day aid research on human neurological and psychiatric disorders, such as dementia and schizophrenia, and that these mini-brains could one day be connected to mini-eyes, which does not sound creepy at all.

There are some ethical issues. Can these organoids feel pain? Do they form memories? Will they be implanted in animals, making chimeras, or human-animal hybrids? Will we get to see a centaur playing beer pong? Someone’s got to think about that.

These are interesting times, no doubt, and while you ponder the idea of tiny lab-grown brains, this is what Midjourney had to say about “intelligence in a dish”, which is about as far from reality as this paper.

You have probably seen the headlines. The James Webb Space Telescope seems to have discovered six galaxies that shouldn’t exist.

The news isn’t all that new. The data came out last summer and the analysis appeared on a preprint server. It’s just that now it’s been published in Nature, so it’s kind of official, I guess.

The conundrum is that these galaxies are big, but they already existed when the universe was only between 500 million and 700 million years old. According to the standard theory of cosmology, that’s the one with dark matter, this shouldn’t be possible. In a theory with dark matter, galaxies grow slowly and gradually by mergers of smaller galaxies.

This figure shows how astrophysicists think this works. All the symbols here are galaxies, and the larger the symbol, the larger the galaxy. Time increases from the bottom up. At the beginning you have all these tiny galaxies, and then they join into increasingly larger ones. But what Webb has seen is what you could call a curious case of baby galaxy gigantism.

You can see it in this image: The red blobs appear to be six massive, densely packed galaxies. The mass of their stars is at least a billion times the mass of our sun, and in one case about 100 billion times. The images were taken with Webb’s Near Infrared Camera and then cross-checked with images from the Hubble Space Telescope, which had previously looked at the same locations in the sky.

You probably read all this in the headlines. What you might not have read is that these big early galaxies were a prediction of Modified Newtonian Dynamics, also known as MOND. I talked about this prediction in a video a year ago. What these observations do is falsify dark matter and support MOND.

This isn’t the first observation that’s done that. It’s happened several times before that MOND made a correct prediction when dark matter didn’t. I think science writers should pay a little more attention to this, what do you think, Albert?

In February last year, the European Centre for Disease Prevention and Control narrowly prevented a global outbreak of food poisoning from contaminated chocolate. They have now put out a report detailing what went right.

On February 17 last year, the UK Health Security Agency raised the first alarm. They reported a suspicious cluster of 18 children who had fallen ill with Salmonella infections. Seven had to be hospitalized. Once alerted, other countries began to watch out for unusual food poisoning. In the next couple of days, France began reporting cases, too. A month later, 59 cases had been reported in five European countries.

The European Centre for Disease Prevention and Control put together an international task force which used genome sequencing on the bacterial samples. It confirmed that the cases almost certainly had a common origin. Interviews with the affected families revealed that what they had in common was the consumption of Kinder chocolate eggs. They traced the cases back to a factory in Belgium in which a tank of buttermilk had become infected.

The factory was temporarily closed. And, just before Easter, they issued the largest global recall of chocolate products ever. It eventually reached 130 countries.

In all, 455 people were poisoned in 17 countries, including some in the United States and Canada. Most of the affected people were children under the age of 10. They all survived.

This sounds bad, but it could have been far worse if the recall hadn’t come that quickly. The European Centre for Disease Prevention and Control especially praised the Salmonella surveillance in the UK that first raised the alarm.

Hi Rishi,

A BritGPT?

Well. GPT is a neural network that’s been trained to generate language without actually knowing anything about the real world. I don’t think you need help with that.

You’re welcome.

Astronomers have discovered that the supermassive black hole at the centre of the Milky Way is about to swallow a huge cloud of dust.

They’ve been watching this dust cloud for two decades through the Keck telescopes perched on top of Mauna Kea on Hawaii and have given it the catchy name X7.

The two Keck telescopes are 10-metre telescopes that can “see” in both the optical and infrared range. They have an extremely impressive adaptive optics system that removes distortions caused by the turbulence of Earth’s atmosphere. You can see here how the system tidies up the images.

With all that observational data, the astronomers have been able to track how X7 has changed over those years. Turns out, it’s changed a lot. At first, it was sort of a comet-shaped blob with a flared tail. Now, as you can see in this image from 2021, it’s long and oblong.

This huge cloud has a mass of about 50 times that of Earth. It’s being yanked apart by the tidal forces of the black hole. Its length has now reached about 3,000 times the distance between Earth and the sun. That’s roughly double the length at the beginning of the observations. The cloud has now entered an orbital path around the black hole.
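For a rough sense of what that yanking means, here is a back-of-the-envelope Newtonian tidal estimate, as a minimal Python sketch. The black hole mass is the commonly quoted 4 million solar masses; the cloud’s distance from the black hole is my own illustrative assumption, not a value from the paper.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

M_BH = 4.0e6 * M_SUN   # Sagittarius A*, commonly quoted mass
r = 5.0e3 * AU         # ASSUMED distance of the cloud's centre from the black hole
length = 3.0e3 * AU    # cloud length quoted above

# Difference in the black hole's pull between the near and far end of the cloud:
tidal_accel = 2 * G * M_BH * length / r**3
print(f"tidal acceleration across X7 ~ {tidal_accel:.1e} m/s^2")

That comes out to roughly a thousandth of a metre per second squared, tiny by everyday standards, but acting steadily over decades it is enough to stretch a dilute gas cloud into a filament.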

And the data crunching didn’t just show the past of X7, it also foretold its future. You can see here what the researchers think will happen. By about 2036, the cloud will get so close to the black hole that it’ll be entirely torn apart.

Of course we’re 26 thousand light years away from the centre of the Milky Way, so all this really happened during the Stone Age, but then they say time is an illusion anyway.

A team of Australian researchers has figured out how to capture radioactive waste from contaminated water, concentrate it, and literally bake it into minerals for safer storage.

Liquid radioactive waste is difficult to handle. Traditionally, the radioactive liquid is passed through filters packed with minerals that can attach to the contaminants. But that’s a slow and cumbersome process.

The Australian team has come up with a better way to do it. They produced a type of clay that can be added to the water. It quickly absorbs a large variety of radioactive substances and creates a mineral which can easily be filtered out of the water. The radioactive compounds are then concentrated in the mineral, but the mineral is unstable and will crumble away within a few years. Not what you want to happen to radioactive stuff.

The researchers therefore heated it up to more than 1300 degrees Celsius. This produced a stable material in which the radioactive substances were even more highly concentrated.

You can see here two images of the final material taken with electron microscopes. In the coloured version on the right, the red chunks are enclosed uranium.

In the final product, the radioactive substances are about 50 thousand times more concentrated than in the original wastewater. I guess that just leaves the question of what to do with it.

A research team from Argonne National Laboratory says it’s created better lithium-oxygen batteries using a solid electrolyte.

Lithium-oxygen batteries have been one of the biggest hopes on the energy market for the past decade. Traditionally, they feature a lithium-metal anode; during discharge, lithium ions move through a liquid electrolyte and combine with oxygen. The chemical reaction yields either lithium superoxide in a one-electron reaction, or lithium peroxide in a two-electron reaction. The more electrons, the higher the energy density of the battery.

The Illinois team experimented with a solid electrolyte made of a ceramic polymer material. On discharge, this material produces lithium oxide in a four-electron reaction. They did a bunch of chemical analyses to prove that this is actually what’s happening. If they’re right, the energy density of these batteries could be higher than that of conventional lithium batteries by up to a factor of four.
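To make the electron counting explicit, these are the textbook discharge reactions; standard lithium-oxygen chemistry, not equations taken from the paper:

Li+ + e- + O2 → LiO2 (superoxide, one electron)
2 Li+ + 2 e- + O2 → Li2O2 (peroxide, two electrons)
4 Li+ + 4 e- + O2 → 2 Li2O (oxide, four electrons)

The more electrons transferred per oxygen molecule, the more charge is stored for a given amount of material.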

Better still, the reaction happens at room temperature with oxygen from the air, so it’s quite convenient. In their experiment, the researchers reached an energy density of about 685 watt-hours per kilogram, which is more than double that of most batteries in use today. The battery lasted more than 1000 cycles, at least under lab conditions.
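As a sanity check on such energy density claims, here is a minimal Python sketch that computes the theoretical specific capacity of each discharge product from Faraday’s law. The 2.9 volt average discharge voltage is an assumed, typical value, not a figure from the paper, and these product-mass-only numbers sit far above what a packaged cell can actually deliver.

F = 96485.0  # Faraday constant, C/mol

# discharge product: (molar mass in g/mol, electrons per formula unit)
products = {
    "LiO2 (superoxide)": (38.94, 1),
    "Li2O2 (peroxide)": (45.88, 2),
    "Li2O (oxide)": (29.88, 2),  # 4 electrons per O2, i.e. 2 per Li2O formed
}

V_CELL = 2.9  # ASSUMED average discharge voltage, volts

for name, (molar_mass, n_electrons) in products.items():
    capacity = n_electrons * F / (3.6 * molar_mass)  # mAh per gram of product
    energy = capacity * V_CELL                       # Wh per kg of product
    print(f"{name}: {capacity:.0f} mAh/g, ~{energy:.0f} Wh/kg theoretical")

The four-electron oxide route comes out far ahead, which is why the measured 685 watt-hours per kilogram, impressive as it is, still leaves plenty of theoretical headroom.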

This all sounds really good, but I’ve found that news about better batteries in materials design is like news about better Alzheimer’s drugs in medicine. It doesn’t seem to translate into reality very well, so don’t get your hopes up too high.

Hello?

Bonjour Emmanuel.

So the vineyard valley that they claimed was a meteorite crater actually turned out to be a meteorite crater.

Yes, that could be a sign of backwards causation. I’ve always wondered why meteorites always land in craters.

You’re welcome. Salut!

A team of researchers from the Netherlands has made an impossibly efficient light sensor that can monitor vital signs from across the room.

Just like you can now pay by holding a credit card near a checkout terminal without touching it, these new contactless devices can measure heart and lung function without touching you.

The researchers did it with something called a large-area thin-film photodiode, which is about 100 times thinner than a sheet of newspaper.

A photodiode converts incoming photons into electrons; the ratio of collected electrons to incident photons is called the external quantum efficiency. The Dutch researchers figured out how to make the diode run at 200 per cent efficiency, meaning it produces two electrons for every photon. Astonishing.
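To unpack what that number means, here is a minimal Python sketch of the standard definition of external quantum efficiency. The definition is textbook; the example photocurrent, optical power, and 850 nanometre wavelength are hypothetical values chosen for illustration, not measurements from the paper.

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
Q = 1.602176634e-19  # elementary charge, C

def external_quantum_efficiency(photocurrent, optical_power, wavelength):
    """Collected electrons per incident photon; can exceed 1 with internal gain."""
    electrons_per_second = photocurrent / Q
    photons_per_second = optical_power * wavelength / (H * C)
    return electrons_per_second / photons_per_second

# Hypothetical near-infrared measurement at 850 nm:
eqe = external_quantum_efficiency(2.7e-9, 2.0e-9, 850e-9)
print(f"EQE = {eqe:.0%}")  # roughly 200% for these illustrative numbers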

The secret to the photodiode’s enhanced quantum efficiency is the addition of a green light to the diode’s layered architecture. The green light seems to rearrange how electrons are transferred and collected, allowing the diode to detect even extremely weak signals in low light, though the researchers say they don’t fully understand it themselves.

To show that it works, the researchers shone near-infrared light onto a volunteer’s skin and detected the reflected light with the photodiode at distances of 50, 90, and 130 centimetres. They demonstrated that they could measure changes in arterial blood flow, which correlate with blood pressure and heart rate. The experiment also measured chest movements as a proxy for lung function.

It’s been possible for some time to measure heart rates from video footage by picking up the pulsing blood flow under the skin, but with normal cameras these methods are too unreliable for clinical applications. Photodiodes have previously been used to read vital signs while in contact with the skin, but this is the first time it’s been done at a distance. All that’s missing now is a Twitter integration.

Google’s Quantum AI team has announced they reached their second milestone on the way to building a commercially interesting quantum computer.

Quantum computers could solve some mathematical problems much faster than conventional computers, which is why they could be useful for business. But they are prone to errors, which has been one of the main bottlenecks in developing the technology.

The problem comes from the nature of quantum computers. They work with entangled states, which are extremely susceptible to even the slightest disturbances, so errors creep in very easily. Without correcting these errors, building large quantum computers will be impossible. At the same time, however, these quantum states can’t be copied to create redundancy, and they can’t be read out without destroying them, which makes error correction very difficult.

Reducing the error rate is therefore basically the Holy Grail of the quantum world. It’s also the second of six milestones Google has set for itself before quantum computing can become commercially viable; the first was the demonstration of quantum advantage, which they achieved in 2019. Now they say they’ve reached that second milestone, and they’ve just published the results of their experiments in Nature.

Over the past couple of years, several experiments have been able to correct single errors with small correction codes, but none has been successful enough to merit scaling up. Now Google has used what’s known as a surface code.

With a smaller version of the code that uses 17 qubits, they found 3.028 per cent logical errors per cycle. But with 49 qubits, that went down to 2.914 per cent. That doesn’t sound like a big difference, but it’s statistically significant.

In the universe of quantum computing, this is huge news, because loosely speaking it means bigger is better. And if bigger is better this means bigger quantum computers could actually work.
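To see why bigger-is-better is the crucial crossover, here is a toy Monte Carlo in Python. This is emphatically not Google’s surface code, just the simplest classical repetition code with majority voting, but it shows the key phenomenon: once the physical error rate is below threshold, making the code bigger makes the logical error rate smaller.

import random

def logical_error_rate(distance, p_physical, trials=200_000):
    """Fraction of trials in which a majority of the bit copies get flipped."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p_physical for _ in range(distance))
        if flips > distance // 2:  # majority corrupted: decoding fails
            failures += 1
    return failures / trials

P = 0.05  # assumed physical error rate, well below this toy code's 50% threshold
for d in (3, 5, 7):
    print(f"distance {d}: logical error rate ~ {logical_error_rate(d, P):.3%}")

In this toy model the logical error rate drops roughly exponentially with code distance. Google’s result is the quantum analogue of that crossover: their 49-qubit code just barely beat their 17-qubit one, suggesting the hardware has only just crossed the break-even point.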

But even Google says there’s still a very long way to go. Most importantly, it’s not clear the code will continue to scale well as the devices get larger. And then there are the other four milestones. But whatever they’re up to next, we’ll keep you up to date, so don’t forget to subscribe.

Files

Webb Telescope sees Galaxies Too Big To Exist. Google Reaches 2nd Quantum Computing Milestone & More

Get your privacy back: Go to https://incogni.com/sabine and sign up for Incogni. First 100 subscribers get 20% off.

00:00 Intro
00:32 Intelligence In A Dish
03:39 Webb Finds Galaxies Too Big To Exist
05:50 The Global Chocolate Disaster That Wasn't
08:01 Our Black Hole Is About To Swallow A Gas Cloud
09:47 New Method to Remove Radionucleotides From Water
11:15 An Impossibly Efficient Light Sensor
13:24 Better Lithium-Air Batteries
15:28 Google Reaches Error-Correction Milestone
17:52 Protect Your Privacy with Incogni

💌 Support us on Donorbox ➜ https://donorbox.org/swtg
👉 Transcript and References on Patreon ➜ https://www.patreon.com/Sabine
📩 Sign up for my weekly science newsletter. It's free! ➜ https://sabinehossenfelder.com/newsletter/
🔗 Join this channel to get access to perks ➜ https://www.youtube.com/channel/UC1yNl2E66ZzKApQdRuTQ4tw/join

Comments

Anonymous

1. As you discuss, the one problem with using brain organoids is that, as brains evolved to develop consciousness, the computers using these organoids could be conscious and therefore would be worthy of consideration as beings. This issue, consciousness, has been discussed in the AI community for quite some time, but as semiconductor-based computers have no capacity for consciousness, I doubt that AI does either. There is a huge difference between an evolved biological system that has developed responses to sensory input and an algorithm that merely reduces the error between prediction and reality.

2. Re: MOND: As a cosmologist, is there anything in MOND that can explain gravitational lensing?

Anonymous

The assumption of mass-independent gravity (which Albert almost found in 1911) gives similar results to MOND. But in contrast to MOND, it can be deduced physically, so it has a foundation. And in contrast to MOND, it does not have free parameters that have to be tuned.

Anonymous

So, why haven't the LambdaCDM crowd packed up their bags and moved on? Well, there are several technical aspects to estimating galaxy mass that come into play. It looks like the group used a Salpeter initial mass function to estimate galaxy mass from measured stellar luminosity. The Salpeter IMF is an empirical relation derived by looking at our local universe. We know that the early universe does not follow the Salpeter IMF, but we do not yet know the correct IMF to use. A "back-of-the-envelope" calculation using the Salpeter IMF is exactly the right thing to do and write a paper on, but recognize that this is not the final answer.

As the paper notes, astronomers already have a conundrum with early star formation. When we measure the ages of globular clusters in the Milky Way, we find that they are as old as the universe, which implies star formation started earlier than cosmological models predict. If I recall correctly, both LambdaCDM and MOND struggle with the early star formation problem. To the extent that early galaxy formation is tied to early star formation, there is so much we just don't know.

The paper also notes that they are using a Schechter function to estimate the density distribution of galaxies. Again, this function is derived using the local universe and may not be appropriate in the early universe. Until they see fit to do a JWST deep field (analogous to the Hubble deep field), the small, low surface brightness galaxy distribution in the early universe is not actually known (hypothesized, but not known). If these big galaxies are only the 1% tip of all early galaxies, then statistically, LambdaCDM is fine. If, however, these big galaxies are "common" in the early universe, then LambdaCDM has a big uphill climb.

What's really great about all of this is that our sphere of ignorance about conditions in the early universe is shrinking fast, and with JWST we may finally be able to break the degeneracy over whether LambdaCDM or MOND does a better job of explaining observed phenomena in the early universe. I'm salivating right now.

Anonymous

TeVeS is the relativistic version of MOND (RMOND) that can explain gravitational lensing, but at this point in time there seem to be a lot of problems with it. For example, it cannot simultaneously get galaxy rotation curves and lensing correct.

Anonymous

Howdy, Tracey! I have never heard of TeVeS, but searching wikipedia turned this up (for others wondering about it): https://en.wikipedia.org/wiki/Tensor%E2%80%93vector%E2%80%93scalar_gravity But obviously, so far, MOND hasn't progressed past those problems. I was wondering, as a complete outsider, if there were any further developments.

Anonymous

I grew up in the age of dark matter and we turned our noses up at those crazy MOND people while debating whether we would find DM as MACHOs or WIMPs. All of this MOND stuff is new to me too. One of the big criticisms of DM is that it needs to be artificially tweaked to fit observed phenomena. But, if the same level of tweaking will eventually be required of MOND, then it's no better. I don't have a horse in this race, but both horses in the race are currently lame.

Anonymous

I hadn't heard of MOND before Sabine, but had DM. I guess I see both existing together, at the same time, as just part of the process simply because reality is a bit more complicated. It's very interesting, reality shows something that we can't explain yet so there's work for scientists!

Anonymous

Hi Tracey, thanks for all the explanations. Knowing the context really helps understand the competing explanations. It sure seems like JWST and other cosmological observatories are far more likely to deliver a discovery than any particle accelerator. To me, the biggest draw of DM was the hope to find it in a laboratory setting. Aside from an occasional anomaly, like the BOAT and a handful of other spurious observations, that hope is wearing thin.

Anonymous

'LambdaCDM', yet another thing I hadn't heard of. I have cosmology books that I haven't gotten to and so much that I don't even know about. Life is too short, which explains specialization to a point.

Anonymous

Hi Jeffery, LambdaCDM is the current standard Big Bang cosmological model. Lambda is Einstein's cosmological constant (presumably caused by dark energy in the standard interpretation) and CDM is "cold" dark matter. "Cold" in this context just means that it moves much slower than the speed of light. Hot dark matter is ultra-relativistic (speeds near c) and warm dark matter has intermediate properties between the other two. LambdaCDM does a better job fitting observations and leading to the large scale structure in the universe than the other DM models, but it is deficient in many ways as well.