Article Summaries
SCIENCE
Futuristic, 'alien-like' nuclear fusion rockets developed in total secret could revolutionize space travel — if they actually work
Source: livescience    Published: 2025-03-20 17:02:22

A U.K. start-up has shocked the space exploration community after unveiling plans to use a novel nuclear fusion propulsion system to power an orbital fleet of reusable "alien-like" rockets, known as Sunbirds, which the company says could revolutionize how we explore the solar system — and beyond. The technology behind this ambitious project will begin testing this year and could make it into space by 2027, Richard Dinan, the founder and CEO of Pulsar Fusion, which is making the rockets, told Live Science. However, the company has set no timeline for when the futuristic spacecraft could become a reality. One expert told Live Science it could be at least a decade away, if not more.

Pulsar Fusion, which also makes traditional plasma thrusters and is developing nuclear fission engines, first announced the Sunbird project on March 6 after developing the concept in "complete secrecy" over the last decade, according to a statement emailed to Live Science. The project was then fully revealed to the public on March 11 at the Space-Comm Expo in London's ExCeL centre.

In theory, the proposed rockets will be stored in massive orbital satellite docks before being deployed and attached to other spacecraft, rapidly propelling them to their destinations like giant "space tugs" and massively reducing the cost of long-haul space missions. A concept video shows how the futuristic rockets could be used to transport a larger spacecraft to Mars and back using docking stations at both ends of the journey (see below).

Related: How do space rockets work without air?

Sunbird rockets could act as "space tugs" that attach to spacecraft in low-Earth orbit and propel them out of our planet's gravity well. (Image credit: Pulsar Fusion)

The Sunbirds' core technology is the Dual Direct Fusion Drive (DDFD) engine, which the company claims will harness the elusive power of nuclear fusion and, hypothetically, provide exhaust speeds much higher than those currently possible.
If it works, this could cut the potential journey time to Mars in half and allow probes to reach Pluto in 4 years, according to Pulsar Fusion. (The current record for a trip to Pluto is 9.5 years, set by NASA's New Horizons spacecraft in 2015.)

"If we are going to be the species that actually get to other planets, then exhaust speeds are pretty much the most important thing," Dinan said during a talk at Space-Comm Expo. "In terms of what can be [theoretically] produced in exhaust speeds, fusion is king."

(Video: Pulsar Fusion Sunbird - Migratory Transfer Vehicle, via YouTube)

Fusion in space

On Earth, using nuclear fusion as a source of near-limitless energy is still likely decades away, which at first glance makes the idea of fusion rockets seem like pure science fiction. However, the opposite is true because "the bar is lower for fusion in space," Dinan told Live Science in an interview at Space-Comm Expo. That's because the proposed reaction needed in space is different from what physicists are attempting on Earth.

In traditional nuclear fusion reactors, known as tokamaks, the goal is to fuse deuterium and tritium — both heavy isotopes, or versions, of hydrogen — in order to emit a constant stream of neutrons, which generates heat (and in turn energy), as well as breeding more fuel for the continued reaction. However, the planned fuel for the DDFD is deuterium and helium-3, an extremely rare isotope of helium with one less neutron than the dominant form. In this case, the reaction would pump out protons, whose charge can be used for direct propulsion. Additionally, the proposed reaction would only need to last for short periods at a time, similar to the timescales already achieved on Earth.
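To give a rough sense of why fusion "is king" for exhaust speeds, the ideal exhaust velocity of a fusion rocket is bounded by the energy the reaction releases per kilogram of fuel. Below is a minimal back-of-the-envelope sketch, using my own arithmetic from the widely published Q-value of the deuterium/helium-3 reaction (about 18.3 MeV per fusion event), not Pulsar Fusion's figures:

```python
# Rough sketch (illustrative, not Pulsar Fusion's numbers): estimate the
# ideal exhaust-velocity scale of D + He-3 -> He-4 + p by assuming all of
# the reaction's energy release goes into the products' kinetic energy.
import math

MEV_TO_J = 1.602e-13              # joules per MeV
U_TO_KG = 1.6605e-27              # kilograms per atomic mass unit

q_value_mev = 18.35               # energy released per D + He-3 fusion event
reactant_mass_u = 2.014 + 3.016   # deuterium + helium-3 masses, in u

# Energy per kilogram of fuel, then v from E = (1/2) m v^2
energy_per_kg = q_value_mev * MEV_TO_J / (reactant_mass_u * U_TO_KG)
ideal_exhaust_speed = math.sqrt(2 * energy_per_kg)  # m/s

print(f"{ideal_exhaust_speed:.2e} m/s")  # ~2.7e7 m/s, roughly 9% of light speed
```

This idealized figure of roughly 27,000 km/s ignores every real-world loss, but it dwarfs the ~4.5 km/s exhaust velocity of the best chemical rockets, which is why fusion propulsion is so attractive on paper.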
The shape and scale of the reactor are also important. Tokamaks are large doughnut-shaped chambers that must mimic the vacuum of space and withstand sustained temperatures equivalent to the surface of the sun. To do so, they use extremely powerful electromagnets to confine plasma in a constant loop. But the DDFD is a linear reactor that does not need to fully constrain the plasma within it. Space also provides a natural vacuum and temperatures approaching absolute zero, which will prevent the reactor from overheating.

The Sunbirds' DDFD would fuse deuterium and helium-3 in an altered and potentially easier-to-achieve version of nuclear fusion. (Image credit: Pulsar Fusion)

However, the designs of the DDFD are still a closely guarded secret and have not yet been properly tested, so their exact workings and feasibility are unclear. Dinan said he understood why people might initially be skeptical of the feasibility of fusion in space but added that, when people look at it logically, it starts to make a lot of sense. "This is in every way achievable," he added. "If we can do fusion on Earth, we can definitely do fusion in space."

But not everyone agrees that it will be so easy. "I'm skeptical," Paulo Lozano, an astronautics professor at MIT who specializes in rocket propulsion, told Live Science in an email. "Fusion is tricky and has been tricky for many reasons and for a long time, especially in compact devices." However, without seeing the full Sunbird designs, he added, he has "no technical basis to judge."

Sunbirds are go

If Pulsar can master the DDFD, the plan is to use the resulting Sunbirds as "space tugs" that can propel any spacecraft from low-Earth orbit (LEO) further into space — largely because fusion is not a viable or safe way of launching rockets directly from Earth's surface.
So rather than having to build giant rockets with massive thrusters to completely escape Earth's gravity, as SpaceX's temperamental Starship rocket does, Sunbirds would allow any spacecraft that makes it into LEO to escape our planet's pull. This would make missions to the moon, Mars and beyond much more feasible — and cheaper, Dinan said. Pulsar also envisions the Sunbirds acting as batteries that could power the systems of any spacecraft they are attached to during the journey, although this is not the primary goal.

Related: 10 times space missions went very wrong in 2024

Another big draw of the Sunbirds is that they would only require small amounts of fuel and could be easily refilled and recharged while they are "perched" on their orbital docking stations, potentially making them much more reusable than most other propulsion systems, Dinan said.

In theory, multiple docking stations could be constructed around the solar system, allowing for faster return journeys to Earth as well. (Image credit: Pulsar Fusion)

The Sunbirds will likely be around 100 feet (30 meters) long and were described as having a "distinctive alien-like design" in the initial press release. This is due to thick "tank-like" armor plating that will hopefully allow them to survive bombardment by cosmic radiation and micrometeorites in space, which is why they look "super weird," Dinan said.

Each Sunbird could cost upwards of $90 million (70 million British pounds) to produce, largely because of how expensive helium-3 is to obtain, Dinan estimated. However, the amount of money these rockets could save a potential client means they would be well worth the cost, he added. "If I can get them there quicker, they will pay for it." In the future, helium-3 could be mined from regolith on the moon, which would be much cheaper than trying to produce it on Earth, Lozano said. But this is not currently part of Pulsar's plans.
Next steps

Pulsar will conduct the first static tests of the DDFD engine this year inside a pair of giant vacuum chambers recently constructed at the company's campus in Bletchley, England. These chambers are the largest of their kind in the U.K. and possibly the largest in Europe, Dinan said. These initial tests won't use helium-3 because it is too expensive to obtain for use in a prototype, meaning that true fusion will not be achieved. Instead, an "inert gas" will be used in its place to test how the engine could theoretically work, Dinan said. Next, Pulsar Fusion plans to conduct an orbital demonstration of some of the "key technological components" in 2027, he added. However, Dinan didn't clarify what this will entail.

If the upcoming tests are successful, Pulsar will begin to raise funds to build a full-scale Sunbird prototype and begin trying to achieve true fusion using helium-3. However, Dinan said there is no timeline for creating the first Sunbird prototype and that it is "too speculative" to predict when this may happen. Lozano "optimistically" predicts that a fully operational Sunbird prototype is at least a decade away but added that physicists often joke that "fusion is 20 years in the future and always will be."



Biological secrets of world's oldest woman, Maria Branyas Morera, revealed after death
Source: livescience    Published: 2025-03-20 16:30:00

Maria Branyas Morera was 117 when she died in August 2024 — but aspects of her biology looked much younger, new research finds. The study could help reveal key factors that help some individuals ward off disease and survive to extremely old ages, scientists say.

Before her death in a nursing home in Catalonia, Spain, Branyas held the record for the world's oldest living person for about a year and a half. Now, a study of urine, blood, stool and saliva samples collected from Branyas in the last year of her life reveals she had a number of factors that potentially protected her against disease. These include genes associated with immune function, excellent cholesterol levels, and a high level of inflammation-fighting bacteria in her gut. The study was posted Feb. 25 to the preprint server bioRxiv and has not yet been peer-reviewed.

Related: Extreme longevity: The secret to living longer may be hiding with nuns... and jellyfish

"One of the goals of the study was to see and find an explanation for this separation between extreme longevity and being very old, but at the same time not having the diseases of the old," study lead author Manel Esteller, a cancer epigeneticist at the Josep Carreras Institute in Spain, told Live Science.

Notably, however, not all researchers are convinced that studying supercentenarians — people ages 110 or older — is a fruitful method of understanding longevity. That's partly because the actual ages of these individuals have been called into question.
The biology of longevity

According to Guinness World Records, one entity that validates old-age records, Branyas was born in San Francisco in 1907 and lived in Texas and Louisiana before moving to Spain in 1915 with her Spanish-born parents. Other than hearing loss and mobility issues, she remained healthy and cognitively sharp until death.

Esteller and his colleagues investigated Branyas' genes, immune cells, blood levels of lipids, and proteins in her tissues, comparing her results to those of younger individuals who had undergone similar testing. For example, they compared Branyas' genetic results to those of 75 other Iberian women in the 1000 Genomes Project, an effort to map variation in the human genome. This comparison revealed seven rare genetic variants in Branyas' genome that had never been detected in European populations. These variants, or distinct versions of genes, were related to cognitive function, immune function, lung function, heart disease, cancer and autoimmune disorders. They may have protected against these diseases and improved organ function, the scientists suggested.

They also found that Branyas had excellent mitochondrial function, meaning the powerhouses that provide cells with energy worked better than those of younger women. She also had healthy cholesterol levels and a high production of proteins that are beneficial for immune function. And based on her stool samples, her gut microbiome was distinct from that of 61- to 91-year-olds previously studied. In particular, she showed a high level of actinobacteria, which typically decline in old age. Bacteria of the genus Bifidobacterium, which are known to excrete anti-inflammatory compounds, were especially prevalent. This contrasts with the "typical decline of this bacterial genus in older individuals," the study authors noted.
"She had this bacteria in the gut that protected against inflammation and she had this bacteria for two reasons," Esteller theorized. "The genome was very welcoming of the population, but [it was] also due to her food." Branyas reported eating three yogurts a day, he said; fermented foods like yogurt contain probiotics, or living microorganisms that can replenish and maintain the gut microbiome.

Maria Branyas Morera as a child (dressed in white), pictured with her family in New Orleans in 1911. (Image credit: Unknown author, Public domain, via Wikimedia Commons)

A molecular clock

Another intriguing finding was a schism between the molecular markers of aging in Branyas' body and her chronological age. When people age, structures at the ends of their chromosomes, called telomeres, become progressively shorter. Telomeres help prevent DNA from fraying, which would contribute to cellular aging and cancer. As expected for someone of an extreme age, Branyas' telomeres were almost nonexistent, Esteller said. She also had a large population of a particular type of immune cell, which is typical in older people. In these two ways, Branyas' biology looked very old — but another marker of aging on her DNA looked strangely young, the team found.

Related: Worldwide, the life-span gap between the sexes is shrinking

As a person ages, DNA accumulates molecular tags on its surface, called methyl groups. The methylation of DNA can act like a "clock," showing how physiologically aged a person is. Branyas' clock looked like that of someone between age 100 and 110, about a decade younger than she was at death. In that respect, "her cells still feel like they were centenarian cells," Esteller said.

What does the study tell us about aging?

An accumulation of many little genetic benefits and lifestyle choices may enable extreme longevity, Esteller concluded.
Given the study's findings, "maybe we can think about interventions now," he said, including potential drugs to increase life span. But there may be a caveat to this research and other studies like it: the ages of the subjects it focuses on.

The validation of extreme old age is controversial. For example, in 1997, the oldest person ever to have lived, Jeanne Calment of France, died; her age was validated by longevity organizations and the Guinness Book of World Records at 122 years old. But critics have since cast doubt on the veracity of that claim, suggesting Calment actually died in 1934 at age 59. They contend that her daughter, Yvonne, took on her identity to evade taxes — and in doing so, she inadvertently became the purported oldest person ever. (If these critics are right, the woman who died in 1997 was actually only 99.)

Another study, which is currently under peer review, argues that the problems with old-age validation go far beyond Calment. This research, first released as a preprint in 2019, suggests that regions with the highest reported proportions of extremely old residents are disproportionately poor and unhealthy. "It doesn't make sense that this level of poverty would predict good health at any age," said Saul Newman, a scholar at the Oxford Institute of Population Ageing and co-author of that research.

What does predict high numbers of very old people, Newman found, is poor record-keeping. For example, U.S. states established birth certificate systems at different times, and the number of people ages 110 and older drops by an estimated 69% to 82% after that record-keeping improves. Often, people born before such documentation was de rigueur might not even know their true ages, Newman told Live Science. In poor regions, people might also have been motivated to tack years onto their age or take on the identity of a deceased relative to receive a pension.
In Branyas' case, she was born a little less than two years after statewide birth certificates came to California in July 1905. Esteller and colleagues relied on the work of age-verification organizations to validate Branyas' age and did not have direct access to her documents. When asked, a representative for Guinness World Records provided Live Science with general information on the organization's methods. "For age-related record titles, the guidelines include requests for government issued documents and further proof to substantiate the claim," the representative wrote in an email to Live Science. "Exact information on these guidelines is only available to applicants and/or legal representation of them."

The hazy nature of old-age records makes interpreting research on the oldest of the old difficult, Newman said. That Branyas' epigenetic clock suggests she was between 100 and 110 could indeed mean that she was a 117-year-old who aged unusually slowly — or it could mean that her paperwork was wrong, and she was between 100 and 110 when she died, he said. "How do you distinguish between those two cases?" he said. "That’s the central problem. You don't know."

On the other hand, Branyas did undeniably reach old age in enviable health, even surviving a bout of COVID-19 in 2020. Thus, her biology might still help researchers distinguish between changes associated with healthy aging and changes associated with disease. "For the first time you have biomarkers that can tell you your age, but other biomarkers that can tell you your pathology," Esteller said. "And these are two different things."



'I was astonished': Ancient galaxy discovered by James Webb telescope contains the oldest oxygen scientists have ever seen
Source: livescience    Published: 2025-03-20 14:55:25

Astronomers have found oxygen in the most distant known galaxy, upending assumptions about how quickly galaxies matured. Named JADES-GS-z14-0, the galaxy where the record-breaking detection was made formed at least 290 million years after the Big Bang and was first spotted by the James Webb Space Telescope (JWST) in 2024.

Heavy elements like oxygen are forged in the nuclear fires of stars. Because the newfound oxygen existed when the universe was just 2% of its present age, this primordial element is a major head-scratcher for astronomers: it suggests that stars in the early universe were born and died, seeding their surroundings with heavy elements, much faster than previously expected. The findings, made by two different research teams, were published March 20 in two papers in the journals Astronomy & Astrophysics and The Astrophysical Journal.

"It is like finding an adolescent where you would only expect babies," Sander Schouws, a researcher at Leiden University in the Netherlands and lead author of the second study, said in a statement. "The results show the galaxy has formed very rapidly and is also maturing rapidly, adding to a growing body of evidence that the formation of galaxies happens much faster than was expected."

The earliest oxygen

Astronomers aren't certain when the first globules of stars began to clump into the galaxies we see today, but cosmologists previously estimated that the process began slowly within the first few hundred million years after the Big Bang. The detection of JADES-GS-z14-0 and other galaxies like it, however, turned this assumption on its head. The light detected by JWST's Near Infrared Spectrograph originated in an enormous halo of young stars surrounding the galaxy's core that had been burning for at least 90 million years before the observation.
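The "2% of its present age" figure is simple arithmetic, assuming the commonly cited present age of the universe of roughly 13.8 billion years:

```python
# Quick check of the "2% of its present age" figure: compare the galaxy's
# formation time (~290 million years after the Big Bang) to an assumed
# present universe age of ~13.8 billion years.
age_of_universe_myr = 13_800     # million years (assumed present age)
time_after_big_bang_myr = 290    # when JADES-GS-z14-0 is seen, per the article

fraction = time_after_big_bang_myr / age_of_universe_myr
print(f"{fraction:.1%}")  # ~2.1%
```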
Related: James Webb telescope confirms there is something seriously wrong with our understanding of the universe

Young stars are composed mostly of hydrogen and helium, which they fuse into heavier elements, like oxygen, as they grow old, scattering them throughout their host galaxies upon the stars' violent deaths. At the roughly 300 million-year mark where we can see JADES-GS-z14-0, astronomers expected the universe to still be too young to be rife with heavy elements. But after pointing the Atacama Large Millimeter/submillimeter Array (ALMA) telescope in Chile's Atacama Desert at the distant galaxy, the researchers were stunned by what they found: JADES-GS-z14-0 had roughly 10 times more oxygen than they expected.

"I was astonished by the unexpected results because they opened a new view on the first phases of galaxy evolution," Stefano Carniani, an astronomer at the Scuola Normale Superiore of Pisa in Italy and lead author of the first paper, said in the statement. "The evidence that a galaxy is already mature in the infant universe raises questions about when and how galaxies formed."



Smallest human relative ever found may have been devoured by a leopard 2 million years ago
Source: livescience    Published: 2025-03-20 14:35:28

One of the smallest human relatives ever found has been unearthed in South Africa. Standing just 3 feet, 4.5 inches (1.03 meters) tall, the adult Paranthropus robustus, who died 2 million years ago, is even shorter than the famously diminutive "Lucy" from Ethiopia and the mysterious group of tiny "hobbits" from Indonesia — but researchers aren't sure why.

"These small early hominin individuals are reconstructed as shorter and stockier than modern human 'pygmies'," or groups of people with an average male height under 4 feet, 11 inches (150 centimeters), study lead author Travis Pickering, a paleoanthropologist at the University of Wisconsin–Madison, told Live Science in an email. The newly uncovered individual, designated SWT1/HR-2, "was probably similarly built — short and stocky," he said.

The leg bones of this species are rarely found, so the new finds also provide clues about how P. robustus walked. The researchers described their findings in the April issue of the Journal of Human Evolution.

The team retrieved chunks of sedimentary rock dated to between 1.7 million and 2.3 million years old from the Swartkrans limestone cave, which is located in South Africa's Cradle of Humankind — a region that encompasses 180 square miles (470 square kilometers) and includes more than a dozen major fossil sites. When researchers began to excavate the blocks in the lab, they discovered three connecting bones — the left hip, femur and tibia — all from one young adult hominin.

Based on the shape of the bones, the researchers think this individual was a young adult female of the species P. robustus, also known as a robust australopithecine due to the large size of its teeth and face. But very few fossils of the body of P. robustus have ever been found, making the new discovery important for understanding what these hominins looked like and how they moved.
Related: 1.4 million-year-old jaw that was 'a bit weird for Homo' turns out to be from never-before-seen human relative

"She was certainly robust in the pelvis and at the hip joint," Pickering said. "However, her leg bones are not as remarkable in this regard — and this is one of the quizzical things about the fossils."

Taken together, the robust hip bones coupled with the more slender leg bones show that this P. robustus moved through the landscape on two feet but also likely climbed trees in search of food or to evade predators, the researchers said in the study.

Close examination of the bones also revealed the probable cause of this young female's death: A leopard devoured her. Leopards tend to hang out in trees near cave openings and prefer to pounce on prey that weighs around 55 pounds (25 kilograms), according to the researchers. This tiny P. robustus, which was found inside a cave, probably weighed about 60.4 pounds (27.4 kg). Carnivore tooth marks were also found on the hominin's bones, offering further clues about the cause of death. Other fossils at the site also have puncture marks that match leopard teeth, study co-author C.K. Brain has argued in past research.

A fossil leopard lower jaw next to a juvenile P. robustus skull fragment from Swartkrans. (Image credit: Jason L. Heaton)

Size is a mystery

Although the leg bones provide important new evidence for what life was like for P. robustus, the researchers are still unsure why this species was so small. There is currently no evidence that the species was affected by insular dwarfism, study co-author Jason Heaton, a paleoanthropologist at the University of Alabama at Birmingham, told Live Science in an email.
That process — in which a species evolves to be smaller over time due to competition for resources — may be why Homo floresiensis, also called the "hobbits," was so short. Rather, for P. robustus, "it may reflect natural variation within the species, population-level differences, or environmental influences such as nutrition or developmental constraints," Heaton said.

More research into the body size of P. robustus is needed, the researchers noted in their study, and further excavation at Swartkrans could yield more bones from the same individual. "I think that there is a good chance that much more of the SWT1/HR-2 skeleton will be recovered," Pickering said, "especially if we are correct and she was killed and eaten by a leopard, since leopards do not generally consume bones."



University of Texas at Arlington on The Conversation
Source: theconversation    Published: 2025-03-20 14:30:16

As the largest university in North Texas and second largest in The University of Texas System, UTA is located in the heart of Dallas-Fort Worth, challenging our students to engage with the world around them in ways that make a measurable impact. UTA offers state-of-the-art facilities that encourage students to be critical thinkers. Through academic, internship, and research programs, our students receive real-world experiences that help them contribute to their community and, ultimately, the world. We have more than 180 baccalaureate, master’s, and doctoral degree programs, and more than 41,000 students walking our campus or engaging in online coursework each year.



Our new study indicates maternal exposure to relatively low fluoride levels may affect intelligence in children
Source: theconversation    Published: 2025-03-20 14:14:23

Fluoride occurs naturally in drinking water, especially well water, but the concentrations are generally low in public water supplies. In some countries, such as the US, Canada, UK, Australia and Ireland, fluoride is commonly added to the public water supply at around 0.7mg per litre to prevent tooth decay. The World Health Organization guideline for fluoride in drinking water is 1.5mg per litre.

Given the concern that fluoride in drinking water might affect children’s intelligence, the addition of this mineral to drinking water has become controversial. Consensus among researchers about the precise nature of the link between fluoridation and intelligence is lacking, and the existing evidence is widely debated. The most recent evaluation by the US National Toxicology Program, part of the Department of Health and Human Services, states with moderate confidence that higher fluoride exposure (above the World Health Organization guideline) is consistently associated with decreased child intelligence, while concluding that more research is needed to understand the effects at lower fluoride exposure levels.

Read more: Fluoride: very high levels in water associated with cognitive impairment in children

A new study my colleagues and I conducted found that relatively low exposure to fluoride during the foetal stage (as a result of the mother’s exposure to fluoride) or in the child’s early years may affect their intelligence. For the study, which was published in Environmental Health Perspectives, we followed 500 mothers and their children in rural Bangladesh, where fluoride occurs naturally in the drinking water, to investigate the link between early-life exposure to fluoride and children’s intelligence. Psychologists evaluated the children’s cognitive abilities at five and ten years of age, using standard IQ tests. The mothers’ exposure to fluoride during pregnancy, and the children’s at five and ten years of age, was determined by measuring the concentrations in urine samples.
Urine samples reflect the continuing exposure from all sources, such as drinking water, food and dental products (such as toothpaste and mouthwash). Urine samples are the most accurate way of determining fluoride exposure in people.

Increasing urinary concentrations of fluoride in pregnant women were linked to decreasing intelligence in their children at five and ten. Even the lowest fluoride concentrations were associated with decreases in the children’s cognition. The average maternal urinary fluoride concentration was 0.63mg per litre, with the vast majority of concentrations falling between 0.26 and 1.4mg per litre. (Image credit: Imago / Alamy Stock Photo)

The children’s average urinary fluoride concentrations at five and ten years of age (0.62 and 0.66mg per litre, respectively) were similar to those of their mothers during pregnancy. Among children who had more than 0.72mg per litre of fluoride in their urine by age ten, increasing urinary fluoride concentrations were associated with lower intelligence. In children with less fluoride in their urine, there were no consistent associations with their intelligence. So childhood exposure seemed to be less detrimental than exposure during early foetal development.

Of the cognitive abilities measured, associations of both maternal and child urinary fluoride concentrations were most pronounced with nonverbal reasoning and verbal abilities. There were no consistent differences between boys and girls. We didn’t find a link between fluoride concentrations in the urine of the five-year-olds and their intelligence. This could be due to the shorter exposure time, or to urinary fluoride concentrations being less reliable in younger children owing to greater variations in how much fluoride is taken up and stored in the body, particularly in the bones.

As well as the children’s urinary fluoride concentration, the fluoride concentrations in drinking water were measured at the age of ten for a random subset of the studied children.
The average was 0.20mg per litre, which is well below the WHO guideline value for fluoride in drinking water. The concentrations in drinking water tracked with the concentrations in urine, confirming that water is a main source of exposure. Still, we couldn’t exclude the possibility that there were contributions from other sources. Fluoride in toothpaste is important for preventing tooth decay, but it’s important to encourage small children not to swallow the toothpaste during brushing.

Limitations

A limitation of our study is that we measured fluoride in only one urine sample at each time point. As a large fraction of the absorbed fluoride is excreted within a few hours, one measurement may give uncertain levels for an individual. However, as the exposure largely comes from water, it can be assumed that the intake is fairly constant over time. Another limitation is that the intelligence tests used have not been standardised for the Bangladeshi population. As a result, we did not convert the results to IQ scores (with an average of 100) that can be compared across populations.

Our findings support previous well-designed studies from Canada and Mexico, where exposure levels below the existing WHO guideline for fluoride in drinking water were associated with impaired cognitive development. Similar findings were recently reported when combining multiple studies from several countries. It was noted that at low exposure levels, findings on cognitive development were more conclusive among studies estimating fluoride exposure via urine than among studies that relied on concentrations in drinking water only. This highlights that imprecise estimation of the exposure can make it difficult to assess the true impact on cognitive development. Taken together, the concern about the effect of fluoride on children’s intelligence at low exposure levels is further strengthened by our study.
Exposure during foetal development seems to be of particular concern, but prolonged childhood exposure does too. Still, as this is an observational study, no firm conclusions can be drawn about causality. There is still a need for more well-designed research studies on low-level fluoride exposure and cognitive development, in combination with experimental studies to determine the possible molecular mechanisms driving it. Collectively, this will create a robust basis for reviewing fluoride health risks and thresholds for drinking water, foods, and dental care products, especially for children.



Scientists edge closer to creating super accurate, chip-sized atomic clock that can fit into your smartphone
Source: livescience    Published: 2025-03-20 13:15:00

A new comb-like computer chip could be the key to equipping drones, smartphones and autonomous vehicles with military-grade positioning technology that was previously confined to space agencies and research labs. Scientists have developed a "microcomb chip" — a 5 millimeter (0.2 inches) wide computer chip equipped with tiny teeth like those on a comb — that could make optical atomic clocks, the most precise timekeeping devices on the planet, small and practical enough for real-world use. This could mean GPS-equipped systems a thousand times more accurate than the best we have today, improving everything from smartphone and drone navigation to seismic monitoring and geological surveys, the researchers said in a statement. They published their findings Feb. 19 in the journal Nature Photonics. Up and atom "Today's atomic clocks enable GPS systems with a positional accuracy of a few meters [where 1 meter is 3.3 feet]. With an optical atomic clock, you may achieve a precision of just a few centimeters [where 1 centimeter is 0.4 inches]," study co-author Minghao Qi, professor of electrical and computer engineering at Purdue University, said in the statement. "This improves the autonomy of vehicles, and all electronic systems based on positioning. An optical atomic clock can also detect minimal changes in latitude on the Earth's surface and can be used for monitoring, for example, volcanic activity." There are approximately 400 high-precision atomic clocks worldwide, which use the principles of quantum mechanics to keep time. This typically involves using microwaves to stimulate atoms to shift between energy states. 
These shifts, called oscillations, happen naturally at an extremely high rate, acting like an ultra-precise ticking clock that keeps timekeeping accurate to within a billionth of a second. That is why atomic clocks form the backbone of Coordinated Universal Time (UTC) — which is used to set global time zones — and GPS (global positioning system) satellites, which rely on atomic timekeeping to provide positioning data to cars, smartphones and other devices. Despite this incredible accuracy, traditional atomic clocks are far less precise than optical atomic clocks. Where standard atomic clocks use microwave frequencies to excite atoms, optical atomic clocks use laser light, enabling them to measure atomic vibrations at a much finer scale — making them thousands of times more precise. Until now, optical atomic clocks have been confined to extremely limited scientific and research environments, such as NASA’s Goddard Space Flight Center and the National Institute of Standards and Technology (NIST). This is because they are extremely complex, putting them well out of reach of your standard Casio fan. Tapping into the teeth of a comb Microcomb chips could change this by bridging the gap between high-frequency optical signals (which optical atomic clocks use) and the radio frequencies used in the navigation and communication systems that modern electronics rely on. "Like the teeth of a comb, a microcomb consists of a spectrum of evenly distributed light frequencies. Optical atomic clocks can be built by locking a microcomb tooth to an ultra-narrow-linewidth laser, which in turn locks to an atomic transition with extremely high frequency stability," the researchers explained in the statement. They likened the new system to a set of gears, where a tiny, fast-spinning gear (the optical frequency) drives a larger, slower one (the radio frequency). 
Just as gears transfer motion while reducing speed, the microcomb acts as a converter that changes the ultra-fast oscillations of atoms into a stable time signal that electronics can process.
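The gear analogy maps onto a simple relation: each comb tooth sits at a frequency f_tooth = f_ceo + N × f_rep, so locking one tooth to a laser that is referenced to an atomic transition pins the countable repetition rate f_rep to the optical frequency. The sketch below is a toy illustration of that division, not the Purdue chip's actual design; the transition frequency, offset frequency and tooth index are hypothetical round numbers.

```python
# Toy illustration of how a frequency comb "gears down" an optical atomic
# transition to a radio frequency that ordinary electronics can count.
# All values are hypothetical round numbers, not the actual chip's parameters.

F_OPTICAL = 429e12   # Hz: optical clock transition (~429 THz, strontium-like)
F_CEO = 35e6         # Hz: carrier-envelope offset frequency of the comb
N_TOOTH = 2_000_000  # index of the comb tooth locked to the clock laser

# Comb relation: f_tooth = f_ceo + N * f_rep  ->  solve for the repetition rate
f_rep = (F_OPTICAL - F_CEO) / N_TOOTH  # Hz: a countable radio frequency

print(f"optical frequency : {F_OPTICAL / 1e12:.0f} THz")
print(f"repetition rate   : {f_rep / 1e6:.1f} MHz")
print(f"'gear ratio'      : {F_OPTICAL / f_rep:,.0f} : 1")
```

Counting the resulting ~214 MHz repetition rate with conventional electronics then inherits the stability of the optical transition — the slow gear being driven by the fast one in the researchers' analogy.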



Debate over H-1B visas shines spotlight on US tech worker shortages
Source: theconversation    Published: 2025-03-20 13:01:47

A heated debate has recently erupted between two groups of supporters of President Donald Trump. The dispute concerns the H-1B visa system, the program that allows U.S. employers to hire skilled foreign workers in specialty occupations – mostly in the tech industry. On the one hand, there are people like Donald Trump’s former strategist Steve Bannon, who has called the H-1B program a “total and complete scam.” On the other, there are tech tycoons like Elon Musk who think skilled foreign workers are crucial to the U.S. tech sector. The H-1B visa program is subject to an annual limit of new visas it can issue, which sits at 65,000 per fiscal year. There is also an additional annual quota of 20,000 H-1B visas for highly skilled international students who have a proven ability to succeed academically in the United States. The H-1B program is the primary vehicle for international graduate students at U.S. universities to stay and work in the United States after graduation. At Rice University, where I work, much of STEM research is carried out by international graduate students. The same goes for most American research-intensive universities. As a computer science professor – and an immigrant – who studies the interaction between computing and society, I believe the debate over H-1B overlooks some important questions: Why does the U.S. rely so heavily on foreign workers for the tech industry, and why is it not able to develop a homegrown tech workforce? The US as a global talent magnet The U.S. has been a magnet for global scientific talent since before World War II. Many of the scientists who helped develop the atomic bomb were European refugees. After World War II, U.S. policies such as the Fulbright Program expanded opportunities for international educational exchange. Attracting international students to the U.S. has had positive results. Among Americans who have won the Nobel Prize in chemistry, medicine or physics since 2000, 40% have been immigrants. 
Tech industry giants Apple, Amazon, Facebook and Google were all founded by first- or second-generation immigrants. Furthermore, immigrants have founded more than half of the nation’s billion-dollar startups since 2018. Stemming the inflow of students Restricting foreign graduate students’ path to U.S. employment, as some prominent Trump supporters have called for, could significantly reduce the number of international graduate students in U.S. universities. About 80% of graduate students in American computer science and engineering programs – roughly 18,000 students in 2023 – are international students. The loss of international doctoral students would significantly diminish the research capability of graduate programs in science and engineering. After all, doctoral students, supervised by principal investigators, carry out the bulk of research in science and engineering in U.S. universities. It must be emphasized that international students make a significant contribution to U.S. research output. For example, scientists born outside the U.S. played key roles in the development of the Pfizer and Moderna COVID-19 vaccines. So making the U.S. less attractive to international graduate students in science and engineering would hurt U.S. research competitiveness. Computing Ph.D. graduates are in high demand. The economy needs them, so the lack of an adequate domestic pipeline seems puzzling. Where have US students gone? So, why is there such a reliance on foreign students for U.S. science and engineering? And why hasn’t America created an adequate pipeline of U.S.-born students for its technical workforce? After discussions with many colleagues, I have found that there are simply not enough qualified domestic doctoral applicants to fill the needs of their doctoral programs. In 2023, for example, U.S. computer science doctoral programs admitted about 3,400 new students, 63% of whom were foreign. 
It seems as if the doctoral career track is simply not attractive enough to many U.S. undergrad computer science students. But why? The top annual salary in Silicon Valley for new computer science graduates can reach US$115,000. Bachelor’s degree holders in computing from Rice University have told me that until recently – before economic uncertainty shook the industry – they were getting starting annual salaries as high as $150,000 in Silicon Valley. Doctoral students in research universities, in contrast, do not receive a salary. Instead, they get a stipend. These vary slightly from school to school, but they typically pay less than $40,000 annually. The opportunity cost of pursuing a doctorate is, thus, up to $100,000 per year. And obtaining a doctorate typically takes six years. So, pursuing a doctorate is not an economically viable decision for many Americans. The reality is that a doctoral degree opens new career options to its holder, but most bachelor’s degree holders do not see beyond the economics. Yet academic computing research is crucial to the success of Silicon Valley. A 2016 analysis of the information technology sectors with a large economic impact shows that academic research plays an instrumental role in their development. Why so little? The U.S. is locked in a cold war with China focused mostly on technological dominance. So maintaining its research-and-development edge is in the national interest. Yet the U.S. has declined to make the requisite investment in research. For example, the National Science Foundation’s annual budget for computer and information science and engineering is around $1 billion. In contrast, annual research-and-development expenses for Alphabet, Google’s parent company, have been close to $50 billion for the past decade. Universities are paying doctoral students so little because they cannot afford to pay more. But instead of acknowledging the existence of this problem and trying to address it, the U.S. 
has found a way to meet its academic research needs by recruiting and admitting international students. The steady stream of highly qualified international applicants has allowed the U.S. to ignore the inadequacy of the domestic doctoral pipeline. The current debate about the H-1B visa system provides the U.S. with an opportunity for introspection. Yet the news from Washington, D.C., about massive budget cuts coming to the National Science Foundation seems to suggest the federal government is about to take an acute problem and turn it into a crisis.
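The opportunity-cost argument above reduces to a few lines of arithmetic. The figures below are the article's approximations (new-graduate salaries of roughly $115,000 to $150,000, stipends under $40,000, about six years to a degree), not precise market data.

```python
# Back-of-the-envelope opportunity cost of a computing doctorate,
# using the article's approximate figures.

salary_low, salary_high = 115_000, 150_000  # $/yr, new-grad industry salaries cited
stipend = 40_000                            # $/yr, typical upper end of PhD stipends
years = 6                                   # typical time to a computing doctorate

gap_low = salary_low - stipend    # forgone earnings per year, low end
gap_high = salary_high - stipend  # forgone earnings per year, high end

print(f"forgone earnings: ${gap_low:,} to ${gap_high:,} per year")
print(f"over {years} years: ${gap_low * years:,} to ${gap_high * years:,}")
```

The $75,000–$110,000 annual gap is consistent with the article's "up to $100,000 per year" ballpark, and compounding it over six years shows why the doctoral track is a hard sell on economics alone.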



Trump administration seeks to starve libraries and museums of funding by shuttering this little-known agency
Source: theconversation    Published: 2025-03-20 12:49:52

On March 14, 2025, the Trump administration issued an executive order that called for the dismantling of seven federal agencies “to the maximum extent consistent with applicable law.” They ranged from the United States Agency for Global Media, which oversees Voice of America, to the Minority Business Development Agency. The Institute of Museum and Library Services was also on the list. Congress created the IMLS in 1996 through the Museum and Library Services Act. The law merged the Institute of Museum Services, which was established in 1976, with the Library Programs Office of the Department of Education. By combining these two departments, Congress sought to create an overarching agency that could more cohesively and strategically support American museums and libraries. The agency’s mission, programs and funding have been reaffirmed through subsequent legislation, such as the Museum and Library Services Act of 2003. The Conversation U.S. interviewed Devon Akmon, who is the director of the MSU Museum at Michigan State University. He explained how the agency supports the nation’s cultural institutions and local communities – and what could be lost if the agency were dissolved. What does the Institute of Museum and Library Services do? The agency provides financial support to a wide array of cultural and educational institutions, including art, science and history museums, zoos, aquariums, botanical gardens and historic sites. Libraries of all types – public, academic, school and research – also benefit from the agency’s funding. Through grants, research and policy initiatives, the IMLS helps these institutions better serve their communities. In the 2019 fiscal year, for example, the IMLS awarded funds to libraries in Nebraska to support economic development in 30 rural communities. The project created rotating “innovation studios” in local libraries and provided residents with tools, instructional materials and programming to foster entrepreneurship and creativity. 
More recently, IMLS awarded a grant to the Hands On Children’s Museum to develop a toolkit that museums across the country can use to support families with relatives who are in prison. For libraries, the IMLS might fund technology upgrades, such as virtual reality learning stations, AI-assisted research aids or digitization of rare books. The agency also pays for community programs that take place in libraries, from early childhood reading initiatives to workshops that help people land jobs. How has the Institute of Museum and Library Services supported your work at the MSU Museum? IMLS grants have played a vital role in enabling the MSU Museum to preserve, enhance and expand access to its collections. For example, we’ve used IMLS grants to develop high-quality audio aids for museum visitors who are blind or have poor vision. Recent funding has supported the digitization of over 2,000 vertebrate specimens, including rare and endangered species. Beyond financial support, the MSU Museum benefits from IMLS policy papers, professional training opportunities and resources developed through the National Leadership Grants for Museums program. Our staff members also contribute to national campaigns spearheaded by the IMLS, such as its Strategies for Countering Antisemitism & Hate initiative. Through these efforts, the IMLS, alongside the American Alliance of Museums, operates as a cornerstone of learning and innovation within the museum field. Looking beyond Michigan State, what might be lost with its shuttering? The IMLS is more than a grantmaking entity – it is the only federal agency dedicated to sustaining the entire museum and library ecosystem in the United States. Its funding has sustained museums, advanced digital preservation, expanded accessibility for low-income communities and fueled innovation in educational programming. In 2024 alone, the agency distributed US$266.7 million through grants, research initiatives and policy development. 
For example, ExplorationWorks, a children’s museum in Helena, Montana, received $151,946 in 2024 from the IMLS to expand its early childhood programs that serve low-income and rural families. Without this support, many institutions will struggle to hire and retain qualified staff, leading to fewer exhibitions, stalled research and reduced educational outreach. The consequences would be particularly severe for small museums and rural museums, which lack the fundraising capacity of larger urban institutions. They’re often the only sources of cultural and historical education in their regions, and their loss would create cultural voids that cannot easily be filled. Trump’s executive order dictated that the Institute of Museum and Library Services and other agencies be eliminated “to the maximum extent consistent with applicable law.” What is the applicable law in this case? I’m not a lawyer. But my understanding is that the “applicable law” in this case primarily refers to the Museum and Library Services Act, which, as I noted earlier, was created in 1996 and has been reauthorized multiple times since then. Since the IMLS was created through this congressional legislation, it cannot simply be eliminated by an executive order. Congress would need to pass a law to repeal or defund it. Additionally, the Antideficiency Act prohibits federal agencies from operating without appropriated funding. If Congress were to defund the IMLS rather than repeal its authorizing statute, the agency would be forced to cease operations due to a lack of money, even if the legal framework for its existence remained intact. Is there anything else you’d like to add? Museums are among the most trusted institutions in the country. They are rare bipartisan beacons of credibility in an era of deep division. A 2021 American Alliance of Museums report found that 97% of Americans view museums as valuable educational assets, while 89% consider them trustworthy sources of information. 
A 2022 American Library Association survey revealed that 89% of voters and 92% of parents believe local public libraries have an important role to play in communities. More than just cultural repositories, museums and libraries bring together citizens and offer learning opportunities for everyday people. By presenting science and history through engaging, evidence-based storytelling, museums help bridge ideological divides and encourage informed discourse. People of all political stripes rely on libraries for free internet access, job searches and literacy programs. The Institute of Museum and Library Services is central to this work. The agency provides leadership, while funding programs and research that help museums and libraries expand their offerings to reach all Americans. Stripping this support would threaten the sustainability of these institutions and weaken their ability to serve as pillars of education, civic engagement and truth. I see it as a disinvestment in an informed, connected and resilient society.



What causes the powerful winds that fuel dust storms, wildfires and blizzards? A weather scientist explains
Source: theconversation    Published: 2025-03-20 12:49:06

Windstorms can seem like they come out of nowhere, hitting with a sudden blast. They might be hundreds of miles long, stretching over several states, or just in your neighborhood. But they all have one thing in common: a change in air pressure. Just like air rushing out of your car tire when the valve is open, air in the atmosphere is forced from areas of high pressure to areas of low pressure. The stronger the difference in pressure, the stronger the winds that will ultimately result. Other forces related to the Earth’s rotation, friction and gravity can also alter the speed and direction of winds. But it all starts with this change in pressure over a distance – what meteorologists like me call a pressure gradient. So how do we get pressure gradients? Strong pressure gradients ultimately owe their existence to the simple fact that the Earth is round and rotates. Because the Earth is round, the sun is more directly overhead during the day at the equator than at the poles. This means more energy reaches the surface of the Earth near the equator. And that causes the lower part of the atmosphere, where weather occurs, to be both warmer and have higher pressure on average than the poles. Nature doesn’t like imbalances. As a result of this temperature difference, strong winds develop at high altitudes over midlatitude locations, like the continental U.S. This is the jet stream, and even though it’s several miles up in the atmosphere, it has a big impact on the winds we feel at the surface. Because Earth rotates, these upper-altitude winds blow from west to east. Waves in the jet stream – a consequence of Earth’s rotation and variations in the surface land, terrain and oceans – can cause air to diverge, or spread out, at certain points. As the air spreads out, the number of air molecules in a column decreases, ultimately reducing the air pressure at Earth’s surface. 
The pressure can drop quite dramatically over a few days or even just a few hours, leading to the birth of a low-pressure system – what meteorologists call an extratropical cyclone. The opposite chain of events, with air converging at other locations, can form high pressure at the surface. In between these low-pressure and high-pressure systems is a strong change in pressure over a distance – a pressure gradient. And that pressure gradient leads to strong winds. Earth’s rotation causes these winds to spiral around areas of high and low pressure. These highs and lows are like large circular mixers, with air blowing clockwise around high pressure and counterclockwise around low pressure. This flow pattern blows warm air northward toward the poles east of lows and cool air southward toward the equator west of lows. As the waves in the jet stream migrate from west to east, so do the surface lows and highs, and with them, the corridors of strong winds. That’s what the U.S. experienced when a strong extratropical cyclone caused winds stretching thousands of miles that whipped up dust storms and spread wildfires, and even caused tornadoes and blizzards in the central and southern U.S. in March 2025. Whipping up dust storms and spreading fires The jet stream over the U.S. is strongest and often the most “wavy” in the springtime, when the south-to-north difference in temperature is often the strongest. Winds associated with large-scale pressure systems can become quite strong in areas where there is limited friction at the ground, like the flat, less forested terrain of the Great Plains. One of the biggest risks is dust storms in arid regions of west Texas or eastern New Mexico, exacerbated by drought in these areas. When the ground and vegetation are dry and the air has low relative humidity, high winds can also spread wildfires out of control. Even more intense winds can occur when the pressure gradient interacts with terrain. 
Winds can sometimes rush faster downslope, as happens in the Rockies or with the Santa Ana winds that fueled devastating wildfires in the Los Angeles area in January. Violent tornadoes and storms Of course, winds can become even stronger and more violent on local scales associated with thunderstorms. When thunderstorms form, hail and precipitation in them can cause the air to rapidly fall in a downdraft, causing very high pressure under these storms. That pressure forces the air to spread out horizontally when it reaches the ground. Meteorologists call these straight line winds, and the process that forms them is a downburst. Large thunderstorms or chains of them moving across a region can cause large swaths of strong wind over 60 mph, called a derecho. Finally, some of nature’s strongest winds occur inside tornadoes. They form when the winds surrounding a thunderstorm change speed and direction with height. This can cause part of the storm to rotate, setting off a chain of events that may lead to a tornado and winds as strong as 300 mph in the most violent tornadoes. Tornado winds are also associated with an intense pressure gradient. The pressure inside the center of a tornado is often very low and varies considerably over a very small distance. It’s no coincidence that localized violent winds from thunderstorm downbursts and tornadoes often occur amid large-scale windstorms. Extratropical cyclones often draw warm, moist air northward on strong winds from the south, which is a key ingredient for thunderstorms. Storms also become more severe and may produce tornadoes when the jet stream is in close proximity to these low-pressure centers. In the winter and early spring, cold air funneling south on the northwest side of strong extratropical cyclones can even lead to blizzards. So, the same wave in the jet stream can lead to strong winds, blowing dust and fire danger in one region, while simultaneously triggering a tornado outbreak and a blizzard in other regions.
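The pressure-gradient relationship at the heart of this explanation can be put into numbers: an air parcel's acceleration is a = (1/ρ)·ΔP/Δx. The figures below (an 8 hPa difference over 100 km, near-surface air density of 1.2 kg/m³) are illustrative values chosen for the sketch, and the calculation deliberately ignores the Coriolis effect and friction that shape real winds.

```python
# Minimal sketch of the pressure-gradient force: how a modest pressure
# difference between a high and a low accelerates air. Illustrative numbers;
# Earth's rotation (Coriolis) and surface friction are ignored.

AIR_DENSITY = 1.2     # kg/m^3, typical near-surface air
delta_p = 800.0       # Pa: an 8 hPa difference between the pressure centers
distance = 100_000.0  # m: 100 km between them

# Pressure-gradient acceleration: a = (1/rho) * (dP/dx)
accel = delta_p / (AIR_DENSITY * distance)  # m/s^2

# Speed an air parcel would reach after one hour if nothing opposed it
speed_1h = accel * 3600.0  # m/s (~24 m/s, roughly 50 mph)

print(f"acceleration: {accel * 1e3:.2f} mm/s^2")
print(f"speed after one hour, unopposed: {speed_1h:.0f} m/s")
```

A seemingly tiny acceleration of a few millimeters per second squared, sustained for an hour, is enough to produce damaging wind speeds — which is why even modest differences between highs and lows matter.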



Ukraine war: how Zelensky rebuilt his relationship with Trump to turn the tables on Putin
Source: theconversation    Published: 2025-03-20 12:48:26

After Donald Trump’s “very good and productive” phone call with Vladimir Putin earlier this week, all eyes were on his subsequent call with Ukraine’s president, Volodymyr Zelensky. Would it, as when they last met in person on February 28 at the White House, descend into disastrous acrimony? Or would Zelensky manage to engage with the US president in a cooperative way that encourages him to see Ukraine and its leader in a more favourable light? The latter, it seems. In a post on his Truth Social site, Trump referred to their “very good telephone call”, which got the two leaders “very much on track”. Zelensky, for his part, talked of a “very good” and “frank” phone call and seemed to agree with everything the US president had to say, taking pains to praise Trump’s and America’s leadership. With his vocal support of Trump’s proposal for peace, Zelensky has put the attention back on Putin. He clearly wants to appear to be the more reasonable negotiating partner by going along with the US president’s proposals. In spite of Zelensky’s misgivings about how trustworthy Putin is, he has agreed to a limited ceasefire with Russia on energy infrastructure (while stressing that, unlike Putin, he agrees with Trump’s aim for a complete ceasefire). Zelensky clearly knows that Russia has a great deal to gain from a pause on attacks on energy grids and oil refineries, given Ukraine’s increasing capacity to mount long-range drone attacks. And a maritime ceasefire, if agreed, would also favour Russia. But by publicly voicing Ukraine’s support for Trump’s plan for a ceasefire, Zelensky has exposed Putin’s lack of interest in stopping hostilities. In the call, Zelensky emphasised that Ukraine was happy to support the US call for a ceasefire, without conditions. 
Putin, meanwhile, in his call with Trump laid out a set of frankly unreasonable demands. These included the complete cessation of military aid and intelligence sharing by Ukraine’s allies, including the US. He also demanded a complete halt to Ukrainian troop mobilisation and rearmament. The demands were so ridiculous that they seemed designed to get Ukraine to reject them. Interestingly, when Trump was interviewed after his phone call with Putin, he denied that the pair had discussed aid. Crucially, he didn’t say whether this was something he would agree to. But the fact that the two leaders discussed the possibility of an ice hockey match between their two countries is an indication of how Putin is able to manipulate the US president with flattery. It helps that Trump clearly admires Putin and has repeatedly said that he trusts the Russian leader. Has Putin overplayed his hand? But this could come with a time limit. Trump, who wants a peace deal to trumpet as a crowning achievement, could well tire of the fact that Putin has made no concessions to allow that deal to progress. The Russian leader is clearly hoping that by seeming to engage with the “peace” process, while at the same time dangling the prospect of doing business with Russia – for example, by offering the US the chance to explore Russia’s own reserves of rare earth minerals – he can keep Trump on side. But while Trump still leans toward Putin, his relationship with Zelensky seems to have improved. The Ukrainian president appears to have learned that Trump doesn’t have a long memory and that flattery goes a long way with the US president. Trump, meanwhile, is no longer calling Zelensky a dictator, and as yet there is no mention of halting US military aid or intelligence to Ukraine. Quite the opposite, in fact: the US has said it will assist in finding more Patriot missile defence systems after Zelensky mentioned that they were sorely needed. 
By giving Trump credit for the ceasefire initiative, Zelensky is putting the ball in Russia’s court. And his apparent receptiveness to Trump’s idea about the US taking over Ukraine’s nuclear power plants will appeal to Trump’s transactional instincts. In addition to offering Trump business deals, Zelensky is now consistently offering Trump praise for his peace efforts. And it’s clear from the tone of the briefing given by White House press secretary Karoline Leavitt after the call that the US was happy with how it went. Leavitt stressed Zelensky’s praise for Trump’s leadership several times. Zelensky has also successfully turned Trump’s attention to the 35,000 missing children abducted from Ukraine into Russia during the war. The US state department had stopped tracking them and had deleted the evidence it had gathered, but Trump is now vowing to return the children home. Putin is generally thought to be stringing these negotiations out as long as possible in order to maximise the amount of Ukrainian territory his army occupies. This could be a risky strategy. Ending the war in Ukraine as quickly as possible was one of Trump’s repeated campaign promises. So the question is how long Trump can remain distracted or satisfied by Putin’s false engagement with the peace process. The American president seems to be changing his tune on Ukraine more generally. His disastrous Oval Office press conference last month with Zelensky was viewed by some as a ploy to portray Ukraine as a difficult and ungrateful partner compared with Russia, which he maintained was only interested in achieving a peaceful end to the war. Now, with Zelensky seemingly agreeing with whatever Trump says, it has become harder for him to take that line. For now, at least, the pressure is back on Putin.



University of California, Los Angeles on The Conversation
Source: theconversation    Published: 2025-03-20 12:48:10

March 20, 2025: Tyrannical leader? Why comparisons between Trump and King George III miss the mark on 18th-century British monarchy. Britain’s George III has gotten a bad rap. He was not the all-powerful monarch that President Trump allegedly aspires to be.
March 18, 2025: A brief history of Medicaid and America’s long struggle to establish a health care safety net. Left out of FDR’s New Deal, the health insurance program for the poor was finally established in 1965.
March 17, 2025: Remembering China’s Empress Dowager Ling, a Buddhist who paved the way for future female rulers. The empress, like many other rulers at the time, legitimized her reign through Buddhism, portraying herself either as a Buddha or as a patron of Buddhists.
March 17, 2025: Trump’s first term polarized teens’ views on racism and inequality. A social scientist tracking adolescents’ beliefs and behaviors over time was uniquely positioned to document changes in teens’ worldviews after Trump’s 2016 election.
March 10, 2025: America is becoming a nation of homebodies. Even after the pandemic lockdowns were lifted, out-of-home activities and travel remained substantially depressed, far below 2019 levels.
February 28, 2025: As flu cases break records this year, vaccine rates are declining, particularly for children and 65+ adults. So far, fewer than half of US children and older adults have been vaccinated during this year’s high-severity flu season.
January 30, 2025: A federal policy expert weighs in on Trump’s efforts to stifle gender-affirming care for Americans under 19. While it doesn’t constitute a national ban on gender-affirming care for minors, the executive order contains provisions that could have a chilling effect on health care providers around the country.
January 30, 2025: Gen Z seeks safety above all else as the generation grows up amid constant crisis and existential threat. Recent generations may have taken safety for granted, but today’s youth are growing up in an era of compounded crises – and being safe is their priority.
January 22, 2025 (in Portuguese): “An eye for an eye”: study analyzes the value of body parts across history and cultures. People from many different cultures around the world and across millennia largely agree about which body parts are most valuable and how much compensation they deserve when injured.
January 10, 2025: An eye for an eye: People agree about the values of body parts across cultures and eras. People from many different cultures across the globe and across millennia largely agree about which body parts are most valuable – and how much compensation they warrant when injured.
January 9, 2025 (in Portuguese): American ecologist explains what the ‘Santa Ana winds’ are and how they fueled the deadly fires in Los Angeles. The dry, strong winds that descend from the mountains toward the Southern California coast were far more violent this January, feeding and spreading wildfires that struck urban areas across several of the city’s neighborhoods.
January 9, 2025 (in French): How desert winds fuel the fires ravaging Southern California. The severity of the early-January Los Angeles fires is explained by the violent winds, the season’s exceptional drought, and the urbanization of once-vegetated areas.
January 9, 2025 (in Spanish): How the Santa Ana winds have caused deadly fires in Southern California. The dry, powerful winds that blow from the mountains toward the Southern California coast combine with population growth and expanding power lines.
January 29, 2025: How Santa Ana winds fueled the deadly fires in Southern California. Where people live today also makes a difference when it comes to fire risk.
December 17, 2024: More than 60 years later, Langston Hughes’ ‘Black Nativity’ is still a pillar of African American theater. ‘Black Nativity’ may be different each time you see it − and that’s exactly what the playwright had in mind.
November 20, 2024: Legal complications await if OpenAI tries to shake off control by the nonprofit that owns the rapidly growing tech company. When for-profit companies are spun out of nonprofits, there is no easy way out of the legal consequences.



Tyrannical leader? Why comparisons between Trump and King George III miss the mark on 18th-century British monarchy
Source: theconversation    Published: 2025-03-20 12:48:10

George III, king of Great Britain and its colonies at the time of the American Revolution, has been maligned unfairly. During both the first and now the second term of President Donald Trump, commentators in the U.S. have invoked the king’s misdeeds to criticize Trump. When the president bypassed Congress to create a new government agency, appointed its head and stopped payment of millions of dollars of allocated federal funds, his critics noted that he assumed the role of Congress, a power grab that supposedly made him similar to George III. According to this criticism, the president engaged in tyranny, just as the founders accused George of doing. As a scholar of early America, I believe, however, that George III has gotten a bad rap. He was not the all-powerful monarch that Trump allegedly aspires to be. In the 1770s, the power of the British king was limited by the authority of Parliament. In that system, which Americans and others praised at the time as balanced, the king and the legislature each had specific duties and powers so that neither could control the government alone. George III was not an absolutist monarch, to use the language of the day for a power-hungry ruler. The English had struggled in the previous century over the extent of the king’s power. After fighting two civil wars, executing one king, and, eventually, forcing the monarch to agree to rule with Parliament rather than on his own, they believed their liberties were safeguarded. This system, known as limited monarchy, was the pride of Great Britain. It was also admired by the American founders. As late as 1774, in his Summary View of the Rights of British America, Thomas Jefferson praised the “free and ancient principles” of the British constitution in which “kings are the servants, not the proprietors of the people.” No kingly tyranny Britons, whether in Great Britain or the colonies, did fear a tyrant, a controlling and abusive leader. 
Some fears came from their study of political theory, which taught that government worked best when composed of various branches that represented the concerns of the different political classes. As this theory went, an unbalanced government would descend into tyranny with a too-powerful monarch; oligarchy under a dominant aristocratic class; or anarchy with the people out of control. They believed these perils could be avoided only by maintaining balance. Even though the British did not fear imbalance or a tyrant king in their own case, they could see the danger threatening elsewhere in Europe. France represented a worst-case scenario. Its absolutist kings had ruled without France’s legislature – the Estates General – for more than a century and a half at the time of the American Revolution. British poet Robert Wolseley’s often reprinted poem declared: “Let France grow proud beneath the tyrant’s lust, While the rackt people crawl and lick the dust. The mighty Genius of this isle disdains Ambitious slavery and golden chains.” Within a few years, Anglo-American criticism of kingly tyranny in France would be validated: That country descended into a violent revolution that resulted in decades of warfare and political violence, including the execution of the entire royal family. This experience confirmed for the British and Americans that a balanced system was best and that they should count their blessings. Why revolt? If the American revolutionaries admired the British system and sought to copy it in the United States, why did they reject the link to Britain and revolt in the first place? Americans did not revolt against the nature of British government. Rather they objected to their changing place within the British Empire. The revolutionary crisis had a number of roots, but most of them arose out of changes in the management of the relationship between the American Colonies and the imperial center. 
From the 1760s, the British government took a more activist role in its American Colonies, limiting their geographical expansion and imposing taxes directly on the population. In the past, Colonists had been free to move west, challenged only by the indigenous residents who fought to defend their lands. Now the British government, aiming to put an end to these wars, blocked expansion. At the same time, to pay down the debt accrued in the recent war with France – fought in part in North America – the government levied taxes not via the Colonial legislatures, as it had before, but directly on residents. This change sparked revolt and, eventually, revolution. Turning on the king Before 1776, the Colonists believed that George III would come to their rescue and halt these changes imposed by Parliament. They thought initially that he did not realize how the new policies affected them. Only in 1776 did they accept that George III supported the policy changes and would not defend their rights. It was in that context that they turned on him and declared him tyrannical, blaming him for the new policies and calling for a break with Britain. As the Declaration of Independence said: “The history of the present King of Great Britain is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States.” Although they complained about the tyranny of George III, their true objection was that their subordinate position within the empire gave them little leverage when opposing policies that king and Parliament agreed to impose on them. Once independent, the founders created a system that imitated the British model of mixed governance and created barriers – the powers of Congress and the oversight of the Supreme Court – that they hoped would safeguard their liberties against the threat of renewed tyranny.



Atlantic sturgeon were fished almost to extinction − ancient DNA reveals how Chesapeake Bay population changed over centuries
Source: theconversation    Published: 2025-03-20 12:47:25

Sturgeons are one of the oldest groups of fishes. Sporting an armor of five rows of bony, modified scales called dermal scutes and a sharklike tail fin, this group of several-hundred-pound beasts has survived for approximately 160 million years. Because their physical appearance has changed very little over time, supported by a slow rate of evolution, sturgeon have been called living fossils. Despite their survival through several geological time periods, many present-day sturgeon species are at risk of extinction, with 17 of 27 species listed as “critically endangered.” Conservation practitioners such as the Virginia Commonwealth University monitoring team are working hard to support recovery of Atlantic sturgeon in the Chesapeake Bay area. But it’s not clear what baseline population level people should strive toward restoring. How do today’s sturgeon populations compare with those of the past? We are a molecular anthropologist and a biodiversity scientist who focus on species that people rely on for subsistence. We study the evolution, population health and resilience of these species over time to better understand humans’ interaction with their environments and the sustainability of food systems. For our recent sturgeon project, we joined forces with fisheries conservation biologist Matt Balazik, who conducts on-the-ground monitoring of Atlantic sturgeon, and Torben Rick, a specialist in North American coastal zooarchaeology. Together, we wanted to look into the past and see how much sturgeon populations have changed, focusing on the James River in Virginia. A more nuanced understanding of the past could help conservationists better plan for the future. Sturgeon loomed large for millennia In North America, sturgeon have played important subsistence and cultural roles in Native communities, which marked the seasons by the fishes’ behavioral patterns.
Large summertime aggregations of lake sturgeon (Acipenser fulvescens) in the Great Lakes area inspired one folk name for the August full moon – the sturgeon moon. Woodland Era pottery remnants at archaeological sites from as long as 2,000 years ago show that the fall and springtime runs of Atlantic sturgeon (Acipenser oxyrinchus) upstream were celebrated with feasting. Archaeological finds of sturgeon remains confirm that early colonial settlers in North America, notably those who established Jamestown in the Chesapeake Bay area in 1607, also prized these fish. When Captain John Smith was leading Jamestown, he wrote “there was more sturgeon here than could be devoured by dog or man.” The fish may have helped the survival of this fortress-colony that was both stricken with drought and fostering turbulent relationships with the Native inhabitants. This abundance is in stark contrast to today, when sightings of migrating fish are sparse. Exploitation during the past 300 years was the key driver of Atlantic sturgeon decline. Demand for caviar drove the relentless fishing pressure throughout the 19th century. The Chesapeake was the second-most exploited sturgeon fishery on the Eastern Seaboard up until the early 20th century, when the fish became scarce. At that point, local protection regulations were established, but only in 1998 was a moratorium on harvesting these fish declared. Meanwhile, abundance of Atlantic sturgeon remained very low, which can be explained in part by their lifespan. Short-lived fish such as herring and shad can recover population numbers much faster than Atlantic sturgeon, which live for up to 60 years and take a long time to reach reproductive age – up to around 12 years for males and as many as 28 years for females. To help manage and restore an endangered species, conservation biologists tend to split the population into groups based on ranges.
The Chesapeake Bay is one of five “distinct population segments” created for Atlantic sturgeon under the 2012 U.S. Endangered Species Act listing. Since then, conservationists have pioneered genetic studies on Atlantic sturgeon, demonstrating through the power of DNA that natal river – where an individual fish is born – and season of spawning are both important for distinguishing subpopulations within each regional group. Scientists have also described genetic diversity in Atlantic sturgeon; more genetic variety suggests they have more capacity to adapt when facing new, potentially challenging conditions. Sturgeon DNA, then and now Archaeological remains are a direct source of data on genetic diversity in the past. We can analyze the genetic makeup of sturgeons that lived hundreds of years ago, before intense overfishing depleted their numbers. Then we can compare that baseline with today’s genetic diversity. The James River was a great case study for testing out this approach, which we call an archaeogenomics time series. Having obtained information on the archaeology of the Chesapeake region from our collaborator Leslie Reeder-Myers, we sampled remains of sturgeon – their scutes and spines – at a precolonial-era site where people lived from about 200 C.E. to about 900 C.E. We also sampled from the important colonial sites of Jamestown (1607-1610) and Williamsburg (1720-1775). And we complemented that data from the past with tiny clips from the fins of present-day, live fish that Balazik and his team sampled during monitoring surveys. DNA tends to get physically broken up and biochemically damaged with age. So we relied on special protocols in a lab dedicated to studying ancient DNA to minimize the risk of contamination and enhance our chances of successfully collecting genetic material from these sturgeon.
Atlantic sturgeon have 122 chromosomes of nuclear DNA – over five times as many as people do. We focused on a few genetic regions, just enough to get an idea of the James River population groupings and how genetically distinct they are from one another. We were not surprised to see that fall-spawning and spring-spawning groups were genetically distinct. What stood out, though, was how starkly different they were, which is something that can happen when a population’s numbers drop to near-extinction levels. We also looked at the fishes’ mitochondrial DNA, a compact molecule that is easier to obtain ancient DNA from compared with the nuclear chromosomes. With our collaborator Audrey Lin, we used the mitochondrial DNA to confirm our hypothesis that the fish from archaeological sites were more genetically diverse than present-day Atlantic sturgeon. Strikingly, we discovered that mitochondrial DNA did not always group the fish by season or even by their natal river. This was unexpected, because Atlantic sturgeon tend to return to their natal rivers for breeding. Our interpretation of this genetic finding is that over very long timescales – many thousands of years – changes in the global climate and in local ecosystems would have driven a given sturgeon population to migrate into a new river system, and possibly at a later stage back to its original one. This notion is supported by other recent documentation of fish occasionally migrating over long distances and mixing with new groups. Our study used archaeology, history and ecology together to describe the decline of Atlantic sturgeon. Based on the diminished genetic diversity we measured, we estimate that the Atlantic sturgeon populations we studied are about a fifth of what they were before colonial settlement. Less genetic variability means these smaller populations have less potential to adapt to changing conditions. 
Our findings will help conservationists plan into the future for the continued recovery of these living fossils.
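The study's central comparison – genetic diversity in archaeological versus present-day samples – can be illustrated with a short sketch. The haplotype counts below are invented for illustration only; the statistic shown, haplotype (gene) diversity H = n/(n−1) · (1 − Σ pᵢ²), is a standard measure for this kind of mitochondrial comparison, not necessarily the exact one the authors used.

```python
# Illustrative sketch (with made-up haplotype labels) of comparing genetic
# diversity between an "ancient" and a "modern" sample of fish.
from collections import Counter

def haplotype_diversity(haplotypes: list[str]) -> float:
    """Haplotype diversity: H = n/(n-1) * (1 - sum of squared frequencies)."""
    n = len(haplotypes)
    freqs = [count / n for count in Counter(haplotypes).values()]
    return n / (n - 1) * (1 - sum(p * p for p in freqs))

# Hypothetical mitochondrial haplotypes observed in each group:
ancient = ["A", "A", "B", "C", "D", "E", "B", "F"]   # many distinct types
modern  = ["A", "A", "A", "A", "B", "A", "A", "B"]   # dominated by one type

print(f"ancient H = {haplotype_diversity(ancient):.2f}")
print(f"modern  H = {haplotype_diversity(modern):.2f}")
```

A lower H in the modern sample is the kind of signal that supports the "about a fifth of pre-colonial diversity" conclusion described above, though the real estimate rests on far more loci and samples.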



5 years on, true counts of COVID-19 deaths remain elusive − and research is hobbled by lack of data
Source: theconversation    Published: 2025-03-20 12:47:06

In the early days of the COVID-19 pandemic, researchers struggled to grasp the rate of the virus’s spread and the number of related deaths. While hospitals tracked cases and deaths within their walls, the broader picture of mortality across communities remained frustratingly incomplete. Policymakers and researchers quickly discovered a troubling pattern: Many deaths linked to the virus were never officially counted. A study analyzing data from over 3,000 U.S. counties between March 2020 and August 2022 found nearly 163,000 excess deaths from natural causes that were missing from official mortality records. Excess deaths, meaning those that exceed the number expected based on historical trends, serve as a key indicator of underreported deaths during health crises. Many of these uncounted deaths were later tied to COVID-19 through reviews of medical records, death certificates and statistical modeling. In addition, lack of real-time tracking for medical interventions during those early days slowed vaccine development by delaying insights into which treatments worked and how people were responding to newly circulating variants. Five years since the beginning of COVID-19, new epidemics such as bird flu are emerging worldwide, and researchers are still finding it difficult to access the data about people’s deaths that they need to develop lifesaving interventions. How can the U.S. mortality data system improve? I’m a technology infrastructure researcher, and my team and I design policy and technical systems to reduce inefficiency in health care and government organizations. By analyzing the flow of mortality data in the U.S., we found several areas of the system that could use updating. Critical need for real-time data A death record includes key details beyond just the fact of death, such as the cause, contributing conditions, demographics, place of death and sometimes medical history. 
This information is crucial for researchers to be able to analyze trends, identify disparities and drive medical advances. Approximately 2.8 million death records are added to the U.S. mortality data system each year. But in 2022 – the most recent official count available – when the world was still in the throes of the pandemic, 3,279,857 deaths were recorded in the federal system. Still, this figure is widely considered to be a major undercount of true excess deaths from COVID-19. In addition, real-time tracking of COVID-19 mortality data was severely lacking. This process involves the continuous collection, analysis and reporting of deaths from hospitals, health agencies and government databases by integrating electronic health records, lab reports and public health surveillance systems. Ideally, it provides up-to-date insights for decision-making, but during the COVID-19 pandemic, these tracking systems lagged and failed to generate comprehensive data. Without comprehensive data on prior COVID-19 infections, antibody responses and adverse events, researchers faced challenges designing clinical trials to predict how long immunity would last and optimize booster schedules. Such data is essential in vaccine development because it helps identify who is most at risk, which variants and treatments affect survival rates, and how vaccines should be designed and distributed. And as part of the broader U.S. vital records system, mortality data is essential for medical research, including evaluating public health programs, identifying health disparities and monitoring disease. At the heart of the problem is the inefficiency of government policy, particularly outdated public health reporting systems and slow data modernization efforts that hinder timely decision-making. These long-standing policies, such as reliance on paper-based death certificates and disjointed state-level reporting, have failed to keep pace with real-time data needs during crises such as COVID-19. 
These policy shortcomings lead to delays in reporting and a lack of coordination between hospital organizations, state government vital records offices and federal government agencies in collecting, standardizing and sharing death records. History of US mortality data The U.S. mortality data system has been cobbled together through a disparate patchwork of state and local governments, federal agencies and public health organizations over the course of more than a century and a half. It has been shaped by advances in public health, medical record-keeping and technology. From its inception to the present day, the mortality data system has been plagued by inconsistencies, inefficiencies and tensions between medical professionals, state governments and the federal government. The first national efforts to track information about deaths began in the 1850s when the U.S. Census Bureau started collecting mortality data as part of the decennial census. However, these early efforts were inconsistent, as death registration was largely voluntary and varied widely across states. In the early 20th century, the establishment of the National Vital Statistics System brought greater standardization to mortality data. For example, the system required all U.S. states and territories to standardize their death certificate format. It also consolidated mortality data at the federal level, whereas mortality data was previously stored at the state level. However, state and federal reporting remained fragmented. For example, states had no uniform timeline for submitting mortality data, resulting in some states taking months or even years to finalize and release death records. Local or state-level paperwork processing practices also remained varied and at times contradictory.
To begin to close gaps in reporting timelines to aid medical researchers, in 1981 the National Center for Health Statistics – a division of the Centers for Disease Control and Prevention – introduced the National Death Index. This is a centralized database of death records collected from state vital statistics offices, making it easier to access death data for health and medical research. The system was originally paper-based, with the aim of allowing researchers to track the deaths of study participants without navigating complex bureaucracies. As time has passed, the National Death Index and state databases have become increasingly digital. The rise of electronic death registration systems in recent decades has improved processing speed when it comes to researchers accessing mortality data from the National Death Index. However, while the index has solved some issues related to gaps between state and federal data, other issues, such as high fees and inconsistency in state reporting times, still plague it. Accessing the data that matters most With the Trump administration’s increasing removal of CDC public health datasets, it is unclear whether policy reform for mortality data will be addressed anytime soon. Experts fear that the removal of CDC datasets has now set a precedent for the Trump administration to cross further lines in its attempts to influence the research and data published by the CDC. The longer-term impact of the current administration’s public health policy on mortality data and disease response is not yet clear. What is clear is that five years since the beginning of COVID-19, the U.S. mortality tracking system remains unequipped to meet emerging public health crises. Without addressing these challenges, the U.S. may not be able to respond quickly enough to public health crises threatening American lives.
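The excess-deaths concept the article opens with – deaths observed beyond what historical trends predict – can be sketched in a few lines. All figures below are invented for illustration, and the baseline here is a deliberately naive average; real excess-mortality estimates model trend, seasonality and population change.

```python
# Toy illustration of "excess deaths": observed mortality minus the
# level expected from historical trends. All numbers are hypothetical.
from statistics import mean

# Annual deaths for a hypothetical region, 2015-2019 (pre-pandemic).
baseline_years = [52_100, 52_800, 53_400, 53_900, 54_500]

# A naive expected value: the average of recent pre-pandemic years.
expected_2020 = mean(baseline_years)

observed_2020 = 61_250  # hypothetical pandemic-year total

excess_2020 = observed_2020 - expected_2020
print(f"expected {expected_2020:,.0f}, observed {observed_2020:,}, excess {excess_2020:,.0f}")
```

Studies like the county-level one cited above refine this idea with statistical models, then compare the excess against officially attributed COVID-19 deaths to estimate the undercount.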



Insomnia can lead to heart issues − a psychologist recommends changes that can improve sleep
Source: theconversation    Published: 2025-03-20 12:46:53

About 10% of Americans say they have chronic insomnia, and millions of others report poor sleep quality. Ongoing research has found that bad sleep could lead to numerous health problems, including heart disease. Dr. Julio Fernandez-Mendoza is a professor of psychiatry and behavioral health, neuroscience and public health sciences at Penn State College of Medicine. He discusses the need for sleep, why teenagers require more sleep than adults, and how you can get a good night’s sleep without medications. The Conversation has collaborated with SciLine to bring you highlights from the discussion that have been edited for brevity and clarity. How much sleep is enough for adults and for adolescents? Julio Fernandez-Mendoza: Adults who report getting about seven to eight hours of sleep per night generally have the best health, in terms of both physical and mental health, and longevity. But that recommendation changes with age. Adults over age 65 may need just six to seven hours of sleep per night. So older people, if otherwise healthy, should not feel anxious if they’re getting just six hours. Young people need the most – at least nine hours – and some younger children may need more. How can insufficient sleep harm our health? Fernandez-Mendoza: Our team was the first to show that those complaining about insomnia – difficulty falling or staying asleep – were more likely to have high blood pressure and be at risk for heart disease. In both teens and adults, we found that insomnia and shortened sleep may lead to elevated stress hormone levels and inflammation. These problems tend to show up before you develop heart disease. What about people who have more serious sleep problems? Fernandez-Mendoza: Good sleep hygiene habits include cutting down on caffeine and alcohol, quitting smoking and exercising regularly. I also recommend not skipping meals, not eating too late at night and not eating too much.
But people with a persistent sleep problem may need to make more behavioral changes. Research studies point to a set of six rules that can improve your sleep. You can follow these changes consistently in the short term, and then choose how to adapt them into your lifestyle down the road. First, get up at the same time no matter what. No matter how much sleep you get. This will anchor your sleep/wake cycle, called your circadian rhythm. Second, do not use your bed for anything except sleep and sexual activity. Third, when you can’t sleep, don’t lie in bed awake. Instead, get out of bed, go into another room if you can, and do an activity that’s enjoyable or relaxing. Go back to bed only when you’re ready to sleep. Fourth, get going with daily activities even after a poor night’s sleep. Don’t try to compensate for sleep loss. If you have chronic insomnia, don’t nap, sleep in, or doze during the day or evening even after poor sleep the previous night. Fifth, go to bed only when you’re actually sleepy enough to fall asleep. And sixth, start with the amount of sleep you’re now getting – with the lowest limit at five hours – and then increase it weekly by 15 minutes. These six rules are evidence-based and go above and beyond simple sleep hygiene habits. If they don’t work, see a provider who can help you. Do you have advice specifically for adolescents? Fernandez-Mendoza: Adolescence is a unique developmental period. It’s not just the obvious physical, emotional and behavioral changes that occur during adolescence and puberty – there are changes in a teenager’s brain that can alter their sleep patterns. When an adolescent goes through puberty, their internal clock changes so that their sleep schedule shifts to later hours. While it’s true that adolescents are more engaged at night because of their social relationships, there’s also biology behind why they want to stay up late – their internal clocks have shifted. It’s not just choice. 
School start times for most adolescents are at odds with that biological shift. So they don’t get enough sleep, which affects their performance in school. Research suggests that schools with later start times are more closely aligned with the science on child development and don’t put adolescents at risk by making them wake up earlier than their bodies are biologically inclined to. Parents can help their teens get better sleep. Set a time for kids to stop doing homework and put away electronics. Instead, they can watch TV with the family or read – something relaxing and enjoyable that will help them wind down before bed. You can also gradually move back their wake-up time. Start on weekends, waking them up 30 minutes earlier every day, including school days, until the child reaches the desired wake-up time. Don’t try to reshift them suddenly – for example, waking up a teenager at 5 a.m. like it’s the military – because that doesn’t work. They won’t get used to it, since it’s at odds with their internal clock. So, do it little by little. If that doesn’t work, see a clinical provider. What kind of treatments can a sleep clinician provide? Fernandez-Mendoza: People should get help if they feel they sleep poorly, if they’re fatigued during the day, or if they snore or grind their teeth. All these issues deserve attention. Some people may think a sleep provider just prescribes expensive medication, but that’s not true. There are behavioral, non-drug-based treatments that work. Cognitive behavioral therapy is the first-line treatment recommended for insomnia. Light therapy may also help, which is the use of a bright light therapy lamp at a given time during the day or evening, depending on the person’s sleep problem. Watch the full interview to hear more. SciLine is a free service based at the American Association for the Advancement of Science, a nonprofit that helps journalists include scientific evidence and experts in their news stories.



Australia’s PBS means consumers pay less for expensive medicines. Here’s how this system works
Source: theconversation    Published: 2025-03-20 12:27:39

The United States pharmaceutical lobby has complained to US President Donald Trump that Australia’s Pharmaceutical Benefits Scheme (PBS) is damaging their profits and has urged Trump to put tariffs on pharmaceutical imports from Australia. Prime Minister Anthony Albanese defended the scheme, saying Australia’s pharmaceutical subsidy scheme was “not up for negotiation”. Opposition Leader Peter Dutton said he would also protect the PBS, which was the “envy of the world”. But what exactly is the PBS, and why does it matter? How did the PBS start? In the early 1900s, Australians had to pay for medicines out-of-pocket. Some could get free or cheap medicines at public hospitals or through Friendly Society Dispensaries, but otherwise access was restricted to those who could afford to pay. At the time, few effective medicines were available. But the development of insulin and penicillin in the 1920s made access to medicines much more important. The Constitution gave the federal government limited powers in the provision of health and welfare, which were largely the responsibility of the states. After World War II, the federal government wanted to expand these powers but it encountered several constitutional roadblocks. A rare successful referendum in 1946 changed that, enabling the National Health Act 1953 to pass. This established the PBS as we know it today. How does the PBS work in practice? The PBS covers the cost of medicines prescribed by doctors. Most are dispensed at community pharmacies (such as treatments for heart disease, the pill and antibiotics), but some more expensive ones are available at public hospitals or specialist treatment centres (such as chemotherapies and IVF medicines). In 2023–24 there were 930 different medicines and 5,164 brands listed on the PBS, costing the government $17.7 billion. The government negotiates the price of each medicine with the pharmaceutical company. Pharmacies then buy these medicines from wholesalers or companies. 
When a patient fills a prescription at a pharmacy, they pay a co-payment. The government pays the difference between the agreed price and the co-payment to the pharmacy – costs that may amount to hundreds of thousands of dollars. There are two co-payments: one for concession card holders ($7.70) and one for the general consumer ($31.60). When a patient hits the annual spending limit (safety net threshold), the co-payment falls to $0 for concession patients and $7.70 for the general consumer. Overall, patients contribute 8.4% to the total cost of the PBS, while the government pays the rest. How are medicine prices set? The PBS is split into two categories: – F1: new, patent-protected medicines with no competition – F2: medicines with multiple brands, including generics. F1 medicines To be listed on the PBS, a new medicine goes through the following process: It’s evaluated for safety, efficacy and quality. A panel of experts (including doctors, pharmacists, epidemiologists, health economists, health consumer advocates and a pharmaceutical industry representative) recommends which medicines should be listed on the PBS, based on effectiveness, safety, cost-effectiveness and the medicine’s total budget impact versus alternative treatments. If the panel recommends a medicine, the price and details of the listing may be further negotiated with the government. (If the panel rejects a medicine, companies may revise their application and re-submit.) Finally, the health minister, and subsequently the Cabinet, formally approves or rejects the panel’s recommendation. If approved, the medicine is listed on the PBS. F2 medicines Generic medicine companies may apply to list another brand on the PBS after a medicine loses patent protection. When this happens, the medicine moves from F1 to F2. Immediately, it incurs a mandatory price discount. Generic medicine companies may offer pharmacists discounts on the PBS list price (for example, ten for the price of nine).
Pharmacists then encourage patients to switch to the cheaper medicine. Companies must disclose these discounts to the government, resulting in further price reductions. Is the PBS system unique? Australia is not special. Many countries use similar assessments to determine whether governments should subsidise new medicines, including the National Institute for Health and Care Excellence (NICE) in the United Kingdom, Canada’s Drug Agency, and Pharmac in New Zealand. Small differences exist, including whether the list of medicines is a positive list (medicines on it are subsidised) or a negative list (medicines on it are excluded from subsidy), whether the lists are established at the central level (such as the PBS in Australia), at the local level (such as by province in Canada), or a mixture, and how co-payments are set. The biggest outlier is the US. Like its health system, the US medicines system is a complex and decentralised mix of public and private organisations, including government agencies, independent organisations, health-care providers and payers such as health insurers. What are the benefits of the PBS? The PBS ensures all Australian patients have access to highly effective medicines. This contributes to a high life expectancy, while keeping health-care costs low relative to other developed countries. This has been achieved by keeping prices down for both F1 and F2 medicines. By doing so, it creates room in the government budget to fund other new medicines. Without the PBS, either taxes or co-payments would have to increase, or fewer medicines would be funded. Other benefits include having a level playing field for all medicines, while maintaining flexibility to fund highly effective medicines for patients with unmet needs. What are the drawbacks of the PBS system? No system is without its drawbacks and risks.
The PBS’s drawbacks include: limited patient involvement in the process the high frequency of re-submissions and delays to PBS listing companies being unwilling to submit off-patent medicines for PBS listing due to high costs and low rewards the ongoing lack of high-quality clinical evidence about medicines to treat rare diseases and certain patient populations, such as children. Another issue is medicine shortages. When PBS-listed brands aren’t available due to supply chain issues, other non-PBS listed brands may be available at full cost to the patient. Increased medicine costs can discourage patients from filling necessary prescriptions, which can have longer-term impacts on health and health expenditure. Finally, companies have argued Australia’s small market size plus low PBS prices can make it financially unviable to bring new medicines to Australia. The PBS is a crucial part of Australia’s health system, making essential medicines affordable, while keeping costs down. Like any system, it has its challenges and there is ongoing debate about whether and how the system should change. Read more: Will the US trade war push up the price of medicines in Australia? Will there be drug shortages?
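The co-payment mechanics described in this article can be sketched in a few lines of code (a minimal illustration using the figures quoted above, $31.60 general and $7.70 concession, falling to $7.70 and $0 past the safety net threshold; the function names here are ours, not part of any real PBS system):

```python
# Minimal sketch of the PBS co-payment rules quoted in the article above
# (illustrative only; figures as reported: general $31.60, concession $7.70;
# past the safety net threshold, general falls to $7.70 and concession to $0).
GENERAL, CONCESSION = 31.60, 7.70

def patient_copayment(dispensed_price: float, concession: bool, past_safety_net: bool) -> float:
    """Return what the patient pays at the pharmacy."""
    if past_safety_net:
        cap = 0.0 if concession else CONCESSION  # general rate falls to the concession rate
    else:
        cap = CONCESSION if concession else GENERAL
    # A patient never pays more than the dispensed price itself.
    return min(cap, dispensed_price)

def government_share(dispensed_price: float, concession: bool, past_safety_net: bool) -> float:
    """The government pays the difference between the agreed price and the co-payment."""
    return dispensed_price - patient_copayment(dispensed_price, concession, past_safety_net)
```

This makes concrete why the article notes the government's share "may amount to hundreds of thousands of dollars": for an expensive medicine, the patient still pays at most the fixed co-payment and the government covers everything else.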



Ancient Egyptian soldiers and Greek mercenaries were at 'Armageddon' when biblical king was killed, study suggests
Source: livescience    Published: 2025-03-20 12:10:00

The large numbers of pottery fragments from seventh century B.C. Egypt and Greece unearthed at the Megiddo archaeological site correspond to references in the Bible. New archaeological evidence from the ancient city of Megiddo — the location of the final battle "Armageddon" in the Book of Revelation — supports the biblical story of an Israelite king and an Egyptian pharaoh clashing there more than 2,600 years ago. According to the Hebrew Bible and the Christian Old Testament (which are slightly different collections of ancient Hebrew writings), the Kingdom of Judah's King Josiah confronted the Egyptian Pharaoh Necho II at Megiddo in 609 B.C. Now, an analysis of ancient pottery fragments indicates that Megiddo was indeed occupied by the Egyptians at that time, Israel Finkelstein, an archaeologist at the University of Haifa and Tel Aviv University, told Live Science. Finkelstein is the lead author of a study describing the finds, which was published Jan. 28 in The Scandinavian Journal of the Old Testament. He said in an email that large numbers of Egyptian pottery fragments had been discovered alongside Greek pottery fragments in a layer dating to the late seventh century B.C. — a time when Egypt often employed Greek mercenaries alongside Egyptian troops. The researchers determined the origins of the fragments by examining the type of clay and their style. The fragments support the biblical accounts that Egyptian forces were at Megiddo during Josiah's reign. However, the findings aren't direct evidence that Josiah was at the battle. If he was there, as the Bible says, it's unclear whether Josiah died from wounds he'd suffered during a battle against the Egyptians at Megiddo, or whether he was executed there as a vassal of the pharaoh. Josiah's death was later said to foretell the fall of Jerusalem in 586 B.C. to the Neo-Babylonians under Nebuchadnezzar II, whose forces destroyed the First Temple, also known as Solomon's Temple.
Related: 1,800-year-old 'Iron Legion' Roman base discovered near 'Armageddon' is largest in Israel Archaeologist Assaf Kleiman of Ben-Gurion University, a study co-author, told Live Science in an email that the confrontation between the two rulers was described differently in two different places in the Bible. "The Josiah-Necho event at Megiddo in 609 BCE is described in the Bible twice: as an execution in a short chronistic verse in Kings and as a decisive battle in Chronicles," he said. The Book of Kings was written close to the time of the reported events, but the Book of Chronicles was composed centuries later, so the account in the Book of Kings was more reliable, he said. (Image credit: Megiddo Expedition) The finds indicate that Egyptian troops were stationed there with a contingent of Greek mercenaries when the biblical King Josiah of Judah was killed in 609 B.C. (Image credit: Megiddo Expedition) Experts say it is not clear whether King Josiah went to Megiddo to do battle as an enemy or as the leader of a vassal state who was executed by Pharaoh Necho. Ancient city The ruins of Megiddo are now in a national park about 18 miles (30 kilometers) southeast of Haifa. Megiddo was a strategically important city at a crossroads on trade and military routes, and it was occupied at different times by Canaanites, Israelites, Assyrians, Egyptians and Persians. Many great battles occurred at Megiddo, and its name inspired the word "Armageddon"—the location of a final battle prophesied in the New Testament's Book of Revelation, which now refers generally to the idea of the end of the world. Excavations at Megiddo have unearthed more than 20 archaeological layers since the 1920s.
The layer with Egyptian and Greek pottery fragments described in the latest study is among several layers that date from after 732 B.C., when records indicate Megiddo was conquered by the Neo-Assyrians under their king Tiglath-Pileser III. According to the Bible, the northern kingdom of Israel fell to the Neo-Assyrians about 10 years later, followed by the expulsion of the "10 tribes" or "lost tribes" of Israel. (Image credit: Megiddo Expedition and Sasha Flit/Tel Aviv University) The many fragments of Greek pottery found at Megiddo suggest a contingent of Greek mercenaries was stationed there alongside Egyptian troops. (Image credit: Megiddo Expedition and Yevgeni Ostrovsky/Ben-Gurion) The many Egyptian pottery fragments found in the same place indicate Megiddo was then under the military control of a force of Egyptian troops. Battle or execution There is debate among academics about whether the encounter between Josiah and Necho at Megiddo in 609 B.C. was actually a battle, or whether Necho had merely executed his vassal Josiah there—in other words, whether the southern Israelite kingdom of Judah was subordinate to Egypt at that time. The Bible does not record this, but history and archaeology indicate Egypt took over the region after 630 B.C. as Neo-Assyrian power declined. Historian Jacob Wright, a professor of Hebrew Bible at Emory University who was not involved in the study, told Live Science that Josiah had probably traveled from Jerusalem to Megiddo to pay homage to Necho but was executed there for an unknown reason. Wright and Reinhard Kratz, a historian at the University of Göttingen in Germany who was also not involved in the study, both noted that the relevant verse in the Book of Kings says only that Josiah traveled to Megiddo and was "put to death" there — and that nothing was written about a battle until more than 100 years later in the Book of Chronicles.
The authors of the new study, too, are cautious about the circumstances of Josiah's death. Finkelstein noted that Josiah was considered an exceptionally pious king, and that the idea of "Armageddon" had only begun after his death. This suggests Josiah's death had led to prophecies that the final battle between the forces of God and the forces of evil would take place where he died, Finkelstein said. Editor's note: This article was updated at 9:28 a.m. ET to correctly attribute a quote explaining that the Josiah-Necho event at Megiddo is described twice in the Bible. That quote was said by archaeologist Assaf Kleiman of Ben-Gurion University, not by Israel Finkelstein, an archaeologist at the University of Haifa and Tel Aviv University.



What is babesiosis? The parasitic infection that 'eats' your red blood cells
Source: livescience    Published: 2025-03-20 11:00:00

Disease name: Babesiosis Affected populations: Babesiosis is a rare and potentially fatal parasitic disease that destroys red blood cells, the cells that supply tissues with oxygen from the lungs. The disease, which is spread by ticks, occurs worldwide, including in the United States and Europe. Fewer than 3,000 cases of babesiosis are reported annually in the U.S., and they most commonly occur between May and September in the upper Midwest and Northeast, including in Minnesota, Wisconsin, Connecticut and New York. Cases tend to rise in the spring and summer because this is when people are most likely to come into contact with the ticks that spread the disease. Causes: Babesiosis is caused by microscopic parasites that belong to the genus Babesia. These parasites usually infect cattle and are spread between animals by ticks that feed on the blood of different hosts. Related: Tick-borne parasite is spreading in the Northeast, CDC says Once inside the body, Babesia parasites invade and destroy red blood cells. This severely limits the ability of these cells to supply tissues with oxygen. While more than 100 species of Babesia parasites have been identified, only a few are known to infect humans. In the U.S., most babesiosis infections are caused by a parasite species called Babesia microti and are spread by blacklegged ticks (Ixodes scapularis), also known as deer ticks. These ticks are typically found in wooded, brushy or grassy areas. In rarer instances, Babesia parasites can be spread from one person to another via contaminated blood transfusions, and they can also spread from mother to fetus across the placenta.
Symptoms: Most people exposed to Babesia parasites don't have any symptoms of babesiosis; this is especially true for young, healthy people. However, in individuals who have weakened immune systems or who are over the age of 50, the parasites can trigger severe disease. People who have had their spleen removed are also more vulnerable to serious infections than the average person, because the spleen normally helps remove infected red blood cells from the body. The parasites known to cause babesiosis are shown here infecting red blood cells under the microscope. (Image credit: Smith Collection/Gado/Contributor via Getty Images) Typical symptoms of babesiosis include fever, chills, sweating, and muscle aches and pains, as well as swelling of the liver and spleen and low levels of red blood cells. Symptoms usually emerge within one to four weeks of a person being infected with Babesia parasites, and they can last for several days after onset. Serious cases of babesiosis can cause multiorgan failure and death, as tissues are starved of oxygen. Estimates of death rates from babesiosis vary considerably between studies. However, surveillance data gathered in 2019 by the Centers for Disease Control and Prevention (CDC) found a 0.57% death rate among patients in the U.S. Death rates may be closer to 20% in patients who belong to high-risk groups, even when they receive treatment. Treatments: Patients who don't have symptoms of babesiosis usually don't require treatment, as the immune system will typically clear the parasites within one to two weeks. In symptomatic patients, the main treatment for babesiosis is a combination of antiparasitic drugs and antibiotics. Antibiotics are primarily used to treat bacterial infections, rather than parasitic infections, but certain kinds, such as clindamycin, can also be effective against parasites.
Patients who are very sick may also require a blood transfusion to replace their damaged and infected red blood cells. The best way to prevent babesiosis is to avoid areas where ticks live, according to the CDC . If you are in those areas, there are precautions you can take to avoid tick bites .



Will the US trade war push up the price of medicines in Australia? Will there be drug shortages?
Source: theconversation    Published: 2025-03-20 10:27:13

Talks of a trade dispute between the United States and Australia over the cost of medicines have no doubt left many Australians scratching their heads. With all this talk of attacks on the Pharmaceutical Benefits Scheme (PBS), and the prospect of a tariff on Australian drugs entering the US, many will be wondering about two key issues. Does this mean the price of medicines will rise? And could any fall-out from the trade dispute lead to drug shortages? Let’s see how this could play out domestically. What is the Pharmaceutical Benefits Scheme? The PBS provides Australians with subsidised medicines, keeping out-of-pocket costs low for consumers. To receive the subsidy from Australian taxpayers, all drug companies (not just US ones) must submit evidence to the Pharmaceutical Benefits Advisory Committee (PBAC), which assesses whether the drug is cost-effective compared with existing alternatives. This process ensures Australian taxpayers get value for money and that the government is not wasting money on drugs that are too costly for the benefits they provide. With limited resources, the federal government needs to decide which drugs to subsidise. Our centre has a contract with the federal government to review submissions to the PBAC. Once the PBAC recommends listing a drug on the PBS, the federal government enters into bilateral (one-on-one) negotiations with each drug company over the price it will charge in Australia. These price negotiations often involve confidential discounts and rebates, which can also delay listing on the PBS and people’s access to medicines at the subsidised rate. Patients pay a fixed co-payment under the PBS regardless of the negotiated price. That’s currently A$31.60 for most PBS medicines, or $7.70 with a concession card. The Australian government picks up the rest of the cost. Can the US influence the price for consumers?
The US has long argued the PBS does not adequately recognise the value of developing innovative pharmaceutical products, as it focuses on demonstrating that drugs provide value for money. US drug companies have recently labelled the PBS “egregious and discriminatory”. When they negotiate with the Australian government, they want to achieve higher prices that they say reflect the cost of developing these drugs in the first place. They know that higher prices increase their profits. The PBS acts to keep prices low and so benefits consumers. Price negotiations are conducted between the federal government and each drug company separately for each drug. So it is difficult to see how the US government could influence these specific negotiations between a private and often global pharmaceutical company and a sovereign government. In any case, the price consumers pay is determined by the amount of subsidy from the federal government. Whether the cost of a drug to the Australian government is $50 or $5,000, consumers still pay A$31.60 (or $7.70 with a concession card). It’s also difficult to see how the imposition of tariffs on Australian exports of pharmaceuticals to the US, as has been flagged, could influence the process. That’s unless these issues are caught up in some larger trade or political deal. Both Labor and the Coalition have come out defending the PBS, saying it would not be a bargaining chip in any trade war. How about drug tariffs? Then there’s the potential for tariffs on Australian pharmaceuticals exported to the US. In 2023, Australia exported US$1.06 billion worth to the US, representing about 40% of its total pharmaceutical exports of about US$2.6 billion. If Trump imposes tariffs, this will increase the prices of Australian drugs sold in the US relative to US-manufactured drugs. For Australian patented drugs where there are no alternatives, this would hurt US consumers, whose only option would be to pay higher prices and consume less.
For other drugs, demand for drugs manufactured in the US would increase, supporting its local manufacturing. The demand for drugs manufactured in Australia would fall (by how much is uncertain), creating incentives for Australian manufacturers to become more efficient. This may mean moving manufacturing overseas in the long term to countries with lower tariffs, or increasing marketing efforts in other countries. But this would not necessarily create new shortages of medicines in Australia. This is because about 90% of the pharmaceuticals we use in Australia are manufactured overseas rather than domestically. What if Australia retaliated with its own tariffs on US-imported pharmaceuticals? Some 21% of our imported pharmaceuticals come from the US. Only then might tariffs influence price negotiations for listing on the PBS. This would be a bad idea for Australians’ access to innovative patented drugs, because there would be no alternatives: prices would rise in negotiations, so restrictions would need to be placed on use and access. Where to now? It’s difficult to know how these trade negotiations will play out, and we’ll likely be hearing more about them in coming weeks. Overall, though, it is difficult to see how the US can influence the prices Australians pay for pharmaceuticals, especially with the recent pre-election announcement of further reductions in drug costs for patients to $25.



The PKK says it will lay down its arms. What are the chances of lasting peace between Turkey and the Kurds? Podcast
Source: theconversation    Published: 2025-03-20 10:05:11

For over 40 years, the Kurdistan Workers Party, the PKK, has waged an armed insurgency against Turkey, fighting for Kurdish rights and autonomy. But in late February, Abdullah Öcalan, the PKK’s imprisoned founder, called for the group to lay down its arms and dissolve itself. Days later, the PKK, which is labelled as a terrorist organisation by Turkey, Europe and the US, declared a ceasefire with Turkey. In this episode of The Conversation Weekly podcast, we speak to political scientist Pinar Dinc about what’s led to this moment and whether it could be the beginning of a lasting peace between Turkey and the Kurds. Despite being imprisoned in solitary confinement since his capture in 1999, Öcalan has remained a central figure in the Kurdish movement, both in Turkey and across the region. His call for the PKK to abandon its armed struggle came months after the leader of a Turkish ultra-nationalist political party launched an initiative to bring an end to the conflict. Over the past few decades, previous rounds of peace talks between the PKK and Turkey, most notably in 2009 and 2013-15, have collapsed. But Pinar Dinc, an associate professor of political science at Lund University in Sweden, says that since the Hamas-led October 7 attacks on Israel and the war in Gaza, the situation in the Middle East has rapidly changed. “It’s mutually beneficial to put an end to this war,” she says. “Both groups recognise the necessity of addressing regional tensions.” Dinc says international support for the Kurdish-led Syrian Democratic Forces in north-eastern Syria, and its Rojava revolution, means that Turkey has been forced to recognise a new “Syrian Kurdish reality”. At the same time, she says, the Kurdish movement has also reached a limit in what it can achieve in an era of modern warfare. Turkey has a huge army. It’s one of the biggest armies of Nato. 
Now we see increased use of drone surveillance and advanced weaponry, and I think the PKK guerrillas in the Qandil mountains, what they refer to as the medya defence zones, they’re also realising that this is getting more and more difficult. Limited discussions began in March between the Turkish government and Kurdish political parties on a way forward in peace negotiations. Dinc says this is a real opportunity for a broader reconciliation process, but there will be real challenges in the detail of what it means for Turkey’s Kurdish population. The PKK is an outcome of structural problems arising from the longstanding oppression and marginalisation of Kurds in Turkey, and addressing these root causes is essential for achieving lasting peace. Listen to the conversation with Dinc on The Conversation Weekly podcast. This episode of The Conversation Weekly was written and produced by Mend Mariwany. Sound design was by Eloise Stevens and theme music by Neeta Sarl. Gemma Ware is the executive producer. Newsclips in this episode from AP Archive, AFP News Agency, Sky News, Med TV, Gazete Duvar, DW News, Al Jazeera English and France 24 English. Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here.



Grattan on Friday: Dutton says he could handle Donald Trump, but can any Australian PM?
Source: theconversation    Published: 2025-03-20 09:15:17

In the Trump age, how the next government, whether Labor or Coalition, will handle foreign affairs, defence and trade is shaping as crucially important. It’s a weird time when your friends become almost as problematic as your potential enemies, but that’s the situation we face. As many have observed, Donald Trump’s long shadow hangs over our election, at a time of multiple other uncertainties. Australia, like other countries, has already felt the brunt of the president’s tariffs policy, and the government is bracing for what may be worse to come with the next round of Trump announcements in early April. So what face would a Peter Dutton government present to the world? And how would he handle Trump? On Thursday at the Lowy Institute, the opposition leader brought his international policies together. He presented a mix of bipartisanship and differences with the government. Some of the latter weren’t so much fundamental disagreements as claims Labor had failed and the Coalition would be more competent or effective. The most frustrating part of Dutton’s speech and answers to questions was the same old problem. For crucial details, particularly on defence spending but also on the future of foreign aid under the Coalition, we were told we’d have to wait for announcements that always seem over the horizon. Dutton says as prime minister he wouldn’t resile from taking on the United States when necessary. With fears about US drug companies spearheading a war on Australia’s Pharmaceutical Benefits Scheme, he declared, “I will stand up and defend the PBS […] against any attempt to undermine its integrity, including by major pharmaceutical companies”. In arguing that, in general, he’d be able to deal with Trump, Dutton invoked the previous Coalition government’s success with Trump Mark 1 (though Mark 2 is very different), and the power of AUKUS to anchor relations. His early priority would be to visit Washington. 
The question Australians should ask themselves is this: “Who is better placed to manage the US relationship and engage with President Trump?” I believe that […] I will be able to work with the Trump administration Mark 2 to get better outcomes for Australia. I will talk to [Trump] about how our national interests are mutual interests. But, as he acknowledged, “Australia’s national interests do not always align perfectly with the interests of partners – even of our closest allies”. The way Trump is operating at the moment, it may be that a PM of either stripe will find him impossible on certain issues. Dutton was once an uncomplicated hawk on China. Now, he is a mix of hawkish and dovish. It’s true things have changed greatly in Australia-China relations in recent times, but another reason for Dutton’s more nuanced position is highlighted by the line in his speech that “Australia has a remarkable Chinese diaspora”. The opposition leader has an eye to the vote of Chinese-Australians. Dutton now walks a line that is critical of China militarily, but anxious to promote and expand the now-restored trading relationship. Currently, there are two major, hot conflicts in the world: the Ukraine war and the violence in the Middle East. On Ukraine, the Coalition and Labor are at one in their backing for President Volodymyr Zelensky, although Dutton criticises aspects of the government’s delivery of support. But they are at odds over Prime Minister Anthony Albanese’s willingness to contribute to a peacekeeping force. “Australia can’t afford the multibillion-dollar sustainment price tag for having troops based in an ill-defined and endless European presence,” Dutton said. The “multibillion-dollar” price tag was overegged, but many would agree there are sound arguments for not deploying Australian forces on such a venture. On the other hand, if an Albanese government did so, you can bet the commitment would be relatively token.
The big gulf between Labor and Coalition is over the Middle East. This has grown from a marginally different reaction after the October 2023 Hamas attack on Israelis to a major disagreement now. Dutton claims Labor “has viewed our relationship with Israel through a domestic policy lens and with a view to its political imperatives” – that is, the Muslim vote. Based on what Dutton says, a change of government would bring a substantial recalibration of Australia’s Middle East policy. One of Dutton’s “first orders of business” would be to call Israeli Prime Minister Benjamin Netanyahu to “help rebuild the relationship Labor has trashed”. He added: Israel will be able to count on our support again in the United Nations. And given UNRWA [the Palestinian relief agency] has employed terrorists from Hamas who participated in the 7 October attacks, the organisation will no longer receive funding from a government I lead. The Coalition repeatedly says Australia needs to spend more on defence. It has announced $3 billion to reinstate the fourth squadron of F-35 joint strike fighters, but not said the size of the defence envelope it believes is required. Dutton said: We need to do nothing short of re-thinking defence, re-tooling the ADF, and re-energising our domestic defence industry, and that’s exactly what our government will do. That sounds like a massive task, and so it’s more than time we saw the plan and cost of it. Would the Coalition be willing to go to around 3% of gross domestic product (GDP) on defence spending, as the Trump administration wants? That would require a lot of sacrifice in other policy areas. The Australian Financial Review this week reported Coalition sources saying it is weighing up boosting defence spending to at least 2.5% by 2029. When the Coalition talks up its record in defence, one should also remember the failures, chief among them the delays and chopping and changing in its submarine program. A sub-optimal performance has been bipartisan. 
Dutton was questioned on his position on aid to Pacific countries. Should Australia step up given the void left by the US shutting down aid? If a Dutton government did that, would it mean an overall aid increase, or cuts in the aid budget elsewhere? This was left as another black hole, although he did say the Australian government should make representations to the US for the reinstatement of particular aid programs the US had cut. I don’t agree with some of the funding that they’ve withdrawn, and I think it is detrimental to the collective interests in the region, and I hope that there can be a discussion between our governments about a sensible pathway forward in that regard. Good luck with that. It is hard to avoid the conclusion the overall aid program would be an easy target for the Coalition in the search for savings. When leaders talk, what they don’t say can be as important as what they do.



Wildfires and farm fertilizer use are fueling ozone pollution
Source: sciencenews    Published: 2025-03-20 09:00:00

Images of California’s wildfires this winter speak for themselves about the fires’ devastating effects. But those pictures don’t tell the whole story. Together with soil emissions, the fires are driving an increase in ground-level ozone pollution — causing a fundamental shift in our atmosphere’s chemistry, researchers say, and potentially rendering air pollution standards unmeetable. “We’re entering a new air pollution regime,” says Ian Faloona, an atmospheric chemist at the University of California, Davis. Analyzing satellite data and ground-level observations, Faloona and his colleagues have teased apart the sources that contribute to ozone in major air basins in the southwestern United States. Soil and wildfire emissions of nitrogen-containing ozone precursors, collectively referred to as “NOx,” are increasingly raising ozone levels, the team found. These NOx emissions levels are now comparable with those from human-made sources such as cars and factories throughout the southwestern United States, Faloona says. He reported his initial findings in January at the American Meteorological Society’s annual meeting in New Orleans. Ground-level ozone typically forms when other primary pollutants react in sunlight and stagnant air. It has been linked to adverse health effects, including increased respiratory illness, reproductive problems, premature death and some cancers. That’s why it is among six main air pollutants that the U.S. Environmental Protection Agency has regulated since the 1970s. Over time, the standard for ozone has been ratcheted down, most recently in 2015; it’s now 70 parts per billion over an eight-hour average. But “estimates of future emissions are overlooking an immense source from agricultural emissions, and wishing away wildfires,” Faloona says. While regulations have limited NOx production by human-made sources, particularly in urban areas, satellite data since 2015 began to show rising NOx levels in remote areas of California.
Faloona found patterns linked with an alarming rise in recent wildfire activity and increasing soil emissions due to a warming climate and rising fertilizer use. The findings come as wildfires have ravaged areas coast-to-coast in the United States, from January’s devastating fires in Los Angeles to more recent conflagrations in South Carolina and Long Island, N.Y. While ozone levels in different California air basins have dropped over the past few decades, they remain persistently above the EPA standard for ambient air quality. That includes both urban areas (San Diego) and more agricultural ones (like the Sacramento Valley). A new study teases out how wildfires and agricultural practices may be contributing to the problem. Previous research has shown how wildfire smoke wafting over cities can jump-start ozone production. And Dan Jaffe, a climatologist at the University of Washington in Bothell, Wash., recently showed that the number of days that exceed national air quality ozone thresholds doubles during high wildfire years. But how much wildfire smoke, along with fertilizer emissions, contributed to the problem was unknown. Faloona developed a method to derive how much of the ozone came from various sources, and found a fundamental shift. A steady decrease over the past several decades has now stalled. The vast majority of ozone — 64 to 70 ppb — still wafts in from the Pacific Ocean from sources beyond U.S. borders, as it has since the 1990s. Meanwhile, now-regulated automobile and industrial sources, which once accounted for as much as 15 to 20 ppb in mid-sized cities, now contribute under 6 ppb in most urban areas (excluding megalopolises like Los Angeles). Wildfire and soil impacts boost ozone by another 1 to 7 ppb, he found, or up to 50 percent of the excess ozone.
In a follow-up study focused on one air basin free of wildfire impacts, he found that some 2 ppb of NOx in the air came from agricultural fertilizers. Those numbers might not sound like much. But when it comes to trying to stay below 70 ppb, every bit counts. What emerges is that unregulated sources of ozone precursors from wildfires and agricultural soils are now contributing as much to most urban areas in the U.S. Southwest as traditional anthropogenic sources are. Yet some of those data aren’t always figured into efforts to combat ozone. For instance, for states calculating ozone compliance, the EPA offers a mechanism to exclude data that came from exceptional events — like wildfires. Demonstrating that a day was influenced by smoke is so complicated that states rarely invoke the rule. “If you’re holding the wrong person accountable for pollution they didn’t cause, our system breaks down,” Jaffe says.
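The arithmetic behind the study's concern can be made concrete. The sketch below sums an illustrative ozone budget for a hypothetical mid-sized southwestern city using midpoints of the ranges reported above; the specific values chosen (67, 5 and 4 ppb) are assumptions for illustration, not measurements from the study.

```python
# Back-of-the-envelope ozone budget for a hypothetical mid-sized
# southwestern city. Values are illustrative midpoints of the
# ranges reported in the article, not measured data.

EPA_STANDARD_PPB = 70  # the 2015 eight-hour standard

background = 67      # Pacific inflow; reported range 64-70 ppb
urban_nox = 5        # regulated sources; now under 6 ppb in most cities
wildfire_soil = 4    # wildfire + soil NOx; reported range 1-7 ppb

total = background + urban_nox + wildfire_soil
excess = total - EPA_STANDARD_PPB  # amount over the standard

# Fraction of the locally produced ozone that comes from the
# unregulated (wildfire + soil) sources.
share_unregulated = wildfire_soil / (urban_nox + wildfire_soil)
```

With nearly the whole 70 ppb budget consumed by background inflow, even a few ppb from unregulated sources is enough to push a basin out of compliance, which is why "every bit counts."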



How can you tell if your child’s daycare is good quality?
Source: theconversation    Published: 2025-03-20 06:15:31

This week, we heard claims of shocking abuse and neglect in Australian childcare centres on ABC’s Four Corners program. While 91% of services met or exceeded the national standards as of February 2025, there have also been reports of centres operating with unqualified staff, abusive practices and nutritionally substandard food. How can you tell if your child is going to a good quality childcare service? Read more: Amid claims of abuse, neglect and poor standards, what is going wrong with childcare in Australia? What are the standards? Australia’s childcare regulator – the Australian Children’s Education and Care Quality Authority or ACECQA – oversees national quality standards for early childhood education and care. Services are assessed and given a rating across seven areas, including staffing, children’s health and safety, and the educational program. The ratings note whether services are “exceeding”, “meeting” or “working towards” the national standards. In some cases, they may note “significant improvement [is] required”. These ratings are public (you can search the national register of services) and are a useful starting point for parents. However, they may not reflect the current situation in a service. As the Productivity Commission noted, many services assessed as “meeting” the national standards (which comprise the bulk of the sector) have a gap of more than four years between assessments. Services with lower ratings are reassessed more frequently. But there are other ways for parents to assess the quality of their child’s early childhood education. Read more: We need more than police checks: how parents and educators can keep childcare services safe from abuse Do educators want to work there? If early childhood educators want to work at your childcare service, this is a strong sign it is a good quality service. One of the major issues in the early childhood sector is staff retention.
Excessive workloads, not being valued by employers and poor pay are some of the reasons early childhood educators leave their jobs. This is a huge problem, because high-quality staff are key to providing high-quality education and care, built on strong, stable relationships with children. If you are considering a service, a key question to ask is how long educators have been working there. How often do they have to replace staff? If you are already at a service, ask yourself: are there consistent staff at drop off/pick up? Are there familiar relief educators to cover absences? Or is there unexplained high turnover? As a bottom line, all educators should be warm and caring and get to know every child and their family. What is the centre itself like? Some daycare centres market themselves to parents by offering a “barista made” coffee in the morning, yoga classes and designer interiors. While this might appeal to adult tastes, it is important to think about whether the centre is set up to be suitable and fun for children. For example: Is there space to play outside, with natural materials? (It is recommended toddlers and preschoolers are physically active for at least three hours per day.) Are there plenty of different play options to appeal to different interests and different children, or does nothing seem to be organised? Are toys and equipment in good condition? Are pencils sharpened and ready to use? Are there puzzle pieces missing? Read more: Real dirt, no fake grass and low traffic – what to look for when choosing a childcare centre What about the activities and educational program? In Australia, centres need to provide play-based learning opportunities, which support children’s wellbeing, learning and development. This is not about teaching children to read and do algebra before they start school. It is about supporting children to have positive play experiences, so the associated learning is fun and leaves children wanting to know (and do) more.
Services should provide children with lots of opportunities to explore in age-appropriate ways. For example, toddlers may have a sandpit with multiple tools and toys. Three- and four-year-olds may work on projects, such as building kites, or go on excursions in their local community. Educators should be involved in this play. Sometimes they may act as a partner, helping to extend children’s imaginations. Other times, they may support from the sideline, encouraging a child to climb to a higher part of the climbing frame than yesterday. They should not be telling children what to do all the time. It’s important for children to be given the time and space to test out their theories about how the world works. Some things to look out for include: Is there “cookie cutter” art (where every piece of children’s art looks the same) on the wall, or are children given the chance to express their creativity? Can toys be used in more than one way, in different areas (to encourage children’s agency), or are toys required to be kept in certain places? Can educators talk about the different things they are doing to stimulate and extend children’s play and interests? Families should also receive clear, regular communication about their child’s development and progress. If there are issues with behaviour, the centre should provide evidence-based support that respects the rights and dignity of children (rather than punishing or shaming them). Finally, does your child seem to have fun at childcare? Provided there are no other issues (such as separation anxiety), do they want to go and see their educators and friends? This is a good sign of a quality service that is building children’s sense of belonging. Need more information? If you have any concerns or need more information, try talking to your centre director first. Alternatively, you can contact the regulatory authority in your state or territory.



Cosmic dark energy may be weakening, astronomers say, raising questions about the fate of the universe
Source: theconversation    Published: 2025-03-20 03:46:50

The universe has been expanding ever since the Big Bang almost 14 billion years ago, and astronomers believe a kind of invisible force called dark energy is making it accelerate faster. However, new results from the Dark Energy Spectroscopic Instrument (DESI), released today, suggest dark energy may be changing over time. If the result is confirmed, it may overturn our current theories of cosmology – and have significant consequences for the eventual fate of the universe. In extreme scenarios, evolving dark energy could either accelerate the universe’s expansion to the point of tearing it apart in a “Big Rip” or cause it to collapse inward in a “Big Crunch”. As a member of the DESI collaboration, which includes more than 900 researchers from 70 institutions worldwide, I have been involved in the analysis and interpretation of the dark energy results. A new picture of dark energy First discovered in 1998, dark energy is a kind of essence that seems to permeate space and make the universe expand at an ever-increasing rate. Cosmologists have generally assumed it is constant: it was the same in the past as it will be in the future. The assumption of constant dark energy is baked into the widely accepted Lambda-CDM model of the universe. In this model, only 5% of the universe is made up of the ordinary matter we can see. Another 25% is invisible dark matter that can only be detected indirectly. And by far the bulk of the universe – a whopping 70% – is dark energy. DESI’s results are not the only thing that gives us clues about dark energy. We can also look at evidence from a kind of exploding star called Type Ia supernovae, and the way the path of light is warped as it travels through the universe (so-called weak gravitational lensing). Measurements of the faint afterglow of the Big Bang (known as the cosmic microwave background) are also important.
They do not directly measure dark energy or how it evolves, but they provide clues about the universe’s structure and energy content — helping to test dark energy models when combined with other data. When the new DESI results are combined with all this cosmological data, we see hints that dark energy is more complicated than we thought. It seems dark energy may have been stronger in the past and is now weakening. This result challenges the foundation of the Lambda-CDM model, and would have profound implications for the future of the universe. How DESI maps the universe The DESI project is based at the Kitt Peak National Observatory in Arizona. Its goal is to create the most extensive 3D map of the universe ever made. To do this, it uses a powerful spectroscope to precisely measure the frequency of light coming from up to 5,000 distant galaxies at once. This lets astronomers determine how far away the galaxies are, and how fast they are moving. By mapping galaxies, we can detect subtle patterns in their large-scale distribution called baryon acoustic oscillations. These patterns can be used as cosmic rulers to measure the history of the universe’s expansion. By tracking these patterns over time, DESI can map how the universe’s expansion rate has changed. DESI is only halfway through a planned five-year survey of the universe, releasing data in batches as it goes. The new results are based on the second batch of data, which includes measurements from more than 14 million galaxies and brightly glowing galactic cores called quasars. This dataset spans a cosmic time window of 11 billion years — from when the universe was just 2.8 billion years old to the present day. New data, new challenges The new DESI results represent a major step forward compared with what we saw in the first batch of data. The amount of data collected has more than doubled, which has improved the accuracy of the measurements and made the findings more reliable.
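The basic step of turning a measured spectrum into a distance, as described above, can be sketched in a few lines. This is a simplified, low-redshift illustration with hypothetical wavelength values and an assumed round Hubble constant; real surveys like DESI use full cosmological models rather than this linear approximation.

```python
# Minimal sketch: compare an observed spectral line to its
# rest-frame wavelength to get redshift, then apply Hubble's law.
# The observed wavelength and H0 below are assumed illustrative
# values, not DESI measurements.

C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # assumed Hubble constant, km/s per megaparsec

def redshift(lambda_obs, lambda_rest):
    """Fractional shift of a spectral line: z = (obs - rest) / rest."""
    return (lambda_obs - lambda_rest) / lambda_rest

def distance_mpc(z):
    """Low-redshift approximation: recession velocity v = c*z,
    distance d = v / H0. Valid only for small z."""
    return C_KM_S * z / H0

# Hypothetical observation of the H-alpha line (rest 6562.8 angstroms)
z = redshift(lambda_obs=6865.0, lambda_rest=6562.8)
d = distance_mpc(z)   # roughly 200 megaparsecs for this example
```

Repeating this for millions of galaxies is what lets the survey place each one in a 3D map and trace the baryon acoustic oscillation pattern across cosmic time.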
Results from the first batch of data gave a hint that dark energy might not behave like a simple cosmological constant — but it wasn’t strong enough to draw firm conclusions. Now, the second batch of data has made this evidence stronger. The strength of the results depends on which other datasets they are combined with, particularly the type of supernova data included. However, no combination of data so far meets the typical “five sigma” statistical threshold physicists use as the marker of a confirmed new discovery. The fate of the universe Still, the fact this pattern is becoming clearer with more data suggests that something deeper might be going on. If there is no error in the data or the analysis, this could mean our understanding of dark energy – and perhaps the entire standard model of cosmology – needs to be revised. If dark energy is changing over time, it could have profound implications for the ultimate fate of the universe. If dark energy grows stronger over time, the universe could face a “Big Rip” scenario, where galaxies, stars, and even atoms are torn apart by the increasing expansion rate. If dark energy weakens or reverses, the expansion could eventually slow down or even reverse, leading to a “Big Crunch”. What’s next? DESI aims to collect data from a total of 40 million galaxies and quasars. The additional data will improve statistical precision and help refine the dark energy model even further. Future DESI releases and independent cosmological experiments will be crucial in determining whether this represents a fundamental shift in our understanding of the universe. Future data could confirm whether dark energy is indeed evolving – or whether the current hints are just a statistical anomaly. If dark energy is found to be dynamic, it could require new physics beyond Einstein’s theory of general relativity and open the door to new models of particle physics and quantum gravity.



If NZ wants to decarbonise energy, we need to know which renewables deliver the best payback
Source: theconversation    Published: 2025-03-20 02:06:17

A national energy strategy for Aotearoa New Zealand was meant to be ready at the end of last year. As it stands, we’re still waiting for a cohesive, all-encompassing plan to meet the country’s energy demand today and in the future. One would expect such a plan to first focus on reducing energy demand through improved energy efficiency across all sectors. The next step should be greater renewable electrification of all sectors. However, questions remain about the cradle-to-grave implications of investments in these renewable resources. We have conducted life-cycle assessments of several renewable electricity generation technologies, including wind and solar, that the country is investing in now. We found the carbon and energy footprints are quite small and favourably complement our current portfolio of renewable electricity generation assets. Meeting future demand The latest assessments provided by the Ministry of Business, Innovation and Employment echo earlier work by the grid operator Transpower. Both indicate that overall demand for electricity could nearly double by 2050. Many researchers believe these scenarios are an underestimate. One study suggests the power generation capacity will potentially need to increase threefold over this period. Other modelling efforts project current capacity will need to increase 13 times, especially if we want to decarbonise all sectors and export energy carriers such as hydrogen. This is, of course, because we want all new generation to come from renewable resources, whose variability means much lower capacity factors (the percentage of the year they deliver power). Additional storage requirements will also be enormous. Following the termination of work on a proposed pumped hydro project, other options need investigating.
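The link between capacity factor and how much generation must be built can be sketched with simple arithmetic. The demand figure and capacity factors below are assumed illustrative values, not New Zealand-specific projections.

```python
# Rough sketch: the nameplate capacity needed to deliver a fixed
# amount of annual energy grows as the capacity factor falls.
# All numbers here are hypothetical, for illustration only.

HOURS_PER_YEAR = 8760

def capacity_needed_mw(annual_demand_gwh, capacity_factor):
    """Nameplate MW required to deliver annual_demand_gwh in a year
    at the given capacity factor (fraction of the year at full output)."""
    return annual_demand_gwh * 1000 / (HOURS_PER_YEAR * capacity_factor)

demand = 10_000  # GWh/year, a hypothetical slice of new demand

hydro_mw = capacity_needed_mw(demand, 0.55)  # assumed hydro CF
wind_mw = capacity_needed_mw(demand, 0.35)   # assumed onshore wind CF
solar_mw = capacity_needed_mw(demand, 0.20)  # assumed solar CF
```

The same demand needs roughly two to three times as much solar nameplate capacity as hydro, which is one reason projections of required build-out vary so widely.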
Building renewable generation The latest World Energy Outlook published by the International Energy Agency (IEA) shows that wind and solar, primarily photovoltaic panels, are quickly taking over as the primary renewable technologies. This is also true in Aotearoa New Zealand. An updated version of the generation investment survey, commissioned by the Electricity Authority, shows most of the committed and actively pursued projects (to be commissioned by 2030) are solar photovoltaic and onshore wind farms. Offshore wind projects are on the horizon, too, but have been facing challenges such as proposed seabed mining in the same area and a lack of price stabilisation measures typical in other jurisdictions. New legislation aims to address some of these challenges. Distributed solar power (small-scale systems to power homes, buildings and communities) has seen near-exponential growth. Our analysis indicates wind (onshore and offshore) and distributed solar will make an almost equal contribution to power generation by 2050, with a slightly larger share by utility-scale solar. Cradle-to-grave analyses The main goal is to maintain a stable grid with secure and affordable electricity supply. But there are other sustainability considerations associated with what happens at the end of renewable technologies’ use and where their components come from. The IEA’s Global Critical Minerals Outlook shows the fast-growing global demand for a suite of materials with complex supply chains. We have also investigated the materials intensity of taking up these technologies in Aotearoa New Zealand, and discussed the greater dependence on those supply chains. The challenges in securing these metals in a sustainable manner include environmental and social impacts associated with the mining and processing of the materials and the manufacturing of different components that need to be transported for implementation here. 
There are also operating and maintenance requirements, including the replacement of components, and the dismantling of the assets in a responsible manner. We have undertaken comprehensive life-cycle assessments, based on international standards, of the recently commissioned onshore Harapaki wind farm, a proposed offshore wind farm in the South Taranaki Bight, a utility-scale solar farm in Waikato and distributed solar photovoltaic systems, with and without batteries, across the country. The usual metrics are energy inputs and carbon emissions because they describe the efficiency of these technologies. They are considered a first proxy of whether a technology is appropriate for a given context. Beyond that, we used the following specific metrics:
GWP: global warming potential (carbon emissions during a technology’s life cycle per energy unit delivered).
CPBT: carbon payback time (how long a technology needs to be operational before its life-cycle emissions equal the avoided emissions, either using the grid and its associated emissions or conventional natural gas turbines).
CED: cumulative energy demand over the life cycle of a technology.
EPBT: energy payback time (how long a technology needs to be operational before the electricity it generates equals the CED).
EROI: energy return on investment (the amount of usable energy delivered from an energy source compared to the energy required to extract, process and distribute that source, essentially quantifying the “profit” from energy production).
There is much debate about the minimum energy return on investment that makes an energy source acceptable. A value of more than ten is generally viewed as positive. For all technologies we assessed, the overall greenhouse gas emissions are lower than the grid emissions factor.
Because of New Zealand’s already low-emissions grid, the carbon payback time is around three to seven years for utility-scale generation. But for small-scale, distributed generation it can be up to 13 years. If the displacement of gas turbines is considered, the payback is halved. Energy return on investment is above ten for all technologies, but utility-scale generation is better than distributed solar, with values of between 30 and 75. To put this into perspective, the energy return on investment for hydropower, if operated for 100 years, is reported to be 110. Utility-scale wind and solar being commissioned now have an operational life of 30 years but are typically expected to be refurbished. This means their energy return on investment is becoming comparable to hydropower.
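The payback metrics described above reduce to simple ratios. The sketch below computes them for a hypothetical utility-scale wind farm; every input figure is an assumed round number for illustration and none come from the study itself.

```python
# Illustrative calculation of CPBT, EPBT and EROI as defined in
# the article, using hypothetical inputs for a utility-scale
# wind farm (not figures from the study).

def carbon_payback_years(lifecycle_emissions_t, avoided_t_per_year):
    """CPBT: years of operation until avoided emissions equal
    the technology's total life-cycle emissions."""
    return lifecycle_emissions_t / avoided_t_per_year

def energy_payback_years(ced_gwh, annual_output_gwh):
    """EPBT: years of operation until generated electricity
    equals the cumulative energy demand (CED)."""
    return ced_gwh / annual_output_gwh

def eroi(annual_output_gwh, lifetime_years, ced_gwh):
    """EROI: lifetime energy delivered per unit of energy invested."""
    return annual_output_gwh * lifetime_years / ced_gwh

cpbt = carbon_payback_years(500_000, 100_000)  # tonnes CO2-eq
epbt = energy_payback_years(900, 600)          # GWh
r = eroi(600, 30, 900)                         # 30-year operating life
```

With these assumed inputs the farm pays back its carbon in 5 years and its energy in 1.5, and its EROI of 20 clears the "more than ten" threshold the article mentions. Refurbishing assets to extend lifetime raises EROI directly, since the denominator (energy invested) grows more slowly than the extra energy delivered.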



Stinky penguin poop strikes fear into the hearts of Antarctic krill
Source: sciencenews    Published: 2025-03-20 00:15:00

The foul stench of penguin poop sets Antarctic krill on edge. In lab experiments, the mere scent of penguin droppings — or guano — sent krill scrambling for escape, researchers report March 20 in Frontiers in Marine Science. The stink also seemed to suppress krill’s appetites. Antarctic krill (Euphausia superba) form a cornerstone of the Southern Ocean ecosystem. “They’re the main food source for all of the big, charismatic fauna,” says Nicole Hellessey, an Antarctic marine scientist at the University of Tasmania in Hobart. Whales, penguins and seals all eat — or eat things that eat — krill. Understanding the critters’ movements could help identify key areas for marine conservation. Krill use their antennae to sniff out food, mates and even pollution. But scientists weren’t sure if they could detect predators by scent. To find out, Hellessey and her colleagues netted krill off the Antarctic Peninsula and transported them to nearby Palmer Station. In the lab, the team let krill loose in a flume filled with flowing seawater, adding either algae for the krill to eat, a bit of Adélie penguin poop or both. Cameras tracked the krill’s 3-D movement. Working with the krill was fun, says oceanographer David Fields of the Bigelow Laboratory for Ocean Sciences in East Boothbay, Maine. “They truly are these cute, charismatic animals.” Working with penguin poop was another story. “Penguin crap is the most vile thing you can imagine,” Fields says. Just opening guano storage bags “would clear the entire lab space.” Krill showed a similar reaction to the smell. In algae-only water, they quickly swam toward the food then lingered near the buffet. But in water with algae and guano, the krill zigzagged, Hellessey says. “They’d sort of dart in, eat and dart out.” Krill swam in frantic zigzags in water containing only penguin poop, too. A second set of experiments placed krill in seawater buckets with either algae or algae plus guano. 
Over 22 hours, krill in the algae-only buckets ate about 67 percent of the food. Krill in buckets with penguin poop ate only about 25 percent. The scientists aren’t sure what aromas in penguin feces the krill are reacting to. But since Adélie penguins’ diets are over 99 percent krill, “a lot of that guano would have crushed-up krill sort of scents,” Hellessey suspects. Some chemical cue might make the krill go, “‘Oh my god, my buddy’s hurt, I shouldn’t go over there.’”



Woodside’s bid to expand a huge gas project is testing both major parties’ environmental credentials
Source: theconversation    Published: 2025-03-20 00:06:56

Opposition Leader Peter Dutton has indicated a Coalition government would quickly approve a giant gas project off Western Australia which will release billions of tonnes of greenhouse gases until around 2070. Woodside Energy is leading the joint venture, which would dramatically expand offshore drilling and extend gas production at the North West Shelf project – already Australia’s largest gas-producing venture. In a statement on Wednesday, Dutton said a Coalition government would “prioritise Western Australian jobs and the delivery of energy security” by directing environment officials to fast-track assessment of the extension, later saying “we will make sure that this approval is arrived at in 30 days”. Federal Environment Minister Tanya Plibersek is currently considering the proposal. Mining and business interests have been pushing her to make a decision this month. Dutton’s support for the project is deeply concerning. Evidence suggests extending the project would undermine global efforts to curb carbon emissions and stabilise Earth’s climate. The extension also threatens significant Indigenous sites and pristine coral reef ecosystems. Federal approval of the project puts both natural and heritage assets at risk. What’s this debate all about? The North West Shelf project supplies domestic and overseas markets with gas extracted off WA’s north coast. The project currently comprises offshore extraction facilities and an onshore gas-processing plant at Karratha. Its approval is due to expire in 2030. Woodside’s proposed extension would allow the project to operate until 2070. It would also permit expanded drilling in new offshore gas fields and construction of a new 900km underwater gas pipeline to Karratha. In 2022, the WA Environment Protection Authority recommended a 50-year extension for the plant, if Woodside reduced its projected emissions by changing its operations or buying carbon offsets.
This paved the way for the state government approval in December last year. Gas: a major climate culprit Under the 2015 Paris Agreement, the world is aiming to keep planetary heating to no more than 1.5°C above the pre-industrial average. Greenhouse gas emissions must fall to net zero to achieve the goal. But instead, global emissions are rising. Greenhouse gases – such as methane, nitrous oxide and carbon dioxide – are emitted throughout the gas/LNG production process. This includes when gas is extracted, piped, processed, liquefied and shipped. Emissions are also created when the gas is burned for energy or used elsewhere in manufacturing. Australian emissions increased 0.8% in 2022–23 – and coal and gas burning were the top contributors. However, Australia’s greatest contribution to global emissions occurs when our coal and gas is burned overseas. The North West Shelf project is already a major emitter of greenhouse gases. The proposed extension would significantly increase the project’s climate damage. Woodside estimates the expansion will create 4.3 billion tonnes of greenhouse gases over its lifetime. Greenpeace analysis puts the figure much higher, at 6.1 billion tonnes. Increasing greenhouse gas emissions at this magnitude, when the window to climate stability is fast closing, threatens major damage to Earth’s natural systems, and human health and wellbeing. Woodside says it will use carbon capture and storage to reduce emissions from the project. This technology is widely regarded as unproven at scale. Indeed, it has a history of delays and underperformance in similar gas operations in WA. Woodside proposes to reduce the project’s climate impacts by buying carbon offsets. This involves compensating for a company’s own emissions by paying for cuts to greenhouse gas emissions elsewhere, through activities such as planting trees or generating renewable energy.
However, there are serious doubts over whether carbon offset projects deliver their promised benefits. Threats to marine life and Indigenous heritage Damage from the proposal could extend beyond climate harms. The approval would enable increased drilling in the Browse Basin, including around the pristine Scott Reef. The reef is home to thousands of plant and animal species. Scientists say the project threatens migrating whales and endangered turtles, among other marine life. Also, the onshore infrastructure is located near the 50,000-year-old Murujuga rock art precinct on the traditional lands of five Aboriginal custodial groups. The site contains more than one million petroglyphs said to depict more than 50,000 years of Australian Indigenous knowledge and spiritual beliefs. Traditional Owners suffered severe cultural loss in the 1980s when about 5,000 rock art pieces were damaged or removed during construction of Woodside’s gas plant. The Traditional Owners and scientists fear increased acid gas pollution from the proposed expansion will further damage the rock art. Acting in Australia’s interests The Albanese government has failed to deliver its promised reform of Australia’s national environment laws. This means nature lacks the strong laws needed to protect it from harmful development. At federal, state and territory levels, both major parties support expansion of the gas industry. This takes the form of policy inertia, tax breaks and subsidies for the fossil fuel industry. In the current term of government, Plibersek has green-lit numerous polluting projects. This includes approving several coal mine expansions last year. What’s more, Australian governments support offshore gas developments in the Tiwi Islands, new onshore shale gas extraction in the Northern Territory and the Kimberley and a new coal seam gas pipeline and wells in Queensland.
Approval of the North West Shelf expansion is not in the best interests of Australia and future generations. No federal government should prioritise short-term economic gain over Earth’s climate and human health.



Spellements: Thursday, March 20, 2025
Source: scientificamerican    Published: 2025-03-20 00:00:00

How to Play Click the timer at the top of the game page to pause and see a clue to the science-related word in this puzzle! The objective of the game is to find words that can be made with the given letters such that all the words include the letter in the center. You can enter letters by clicking on them or typing them in. Press Enter to submit a word. Letters can be used multiple times in a single word, and words must contain four letters or more for this size layout. Select the Play Together icon in the navigation bar to invite a friend to work together on this puzzle. Pangrams, words which incorporate all the letters available, appear in bold and receive bonus points. One such word is always drawn from a recent Scientific American article—look out for a popup when you find it! You can view hints for words in the puzzle by hitting the life preserver icon in the game display. The dictionary we use for this game misses a lot of science words, such as apatite and coati. Let us know at games@sciam.com any extra science terms you found, along with your name and place of residence, and we might give you a shout out in our daily newsletter!
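The puzzle's scoring rules (four or more letters, letters reusable, must contain the center letter, a pangram uses every letter) can be expressed directly in code. The letter set, center letter and candidate words below are hypothetical examples, not the actual puzzle for this date.

```python
# Sketch of the word-validity rules described above. The letters,
# center and candidate list are made-up examples for illustration.

def is_valid(word, letters, center):
    """A word scores if it has 4+ letters, contains the center
    letter, and uses only the given letters (reuse allowed)."""
    return (len(word) >= 4
            and center in word
            and set(word) <= set(letters))

def is_pangram(word, letters):
    """A pangram uses every available letter at least once."""
    return set(letters) <= set(word)

letters = "aeiptco"   # hypothetical letter set
center = "t"

candidates = ["apatite", "coat", "tape", "pace", "cap", "optic"]
valid = [w for w in candidates if is_valid(w, letters, center)]
pangrams = [w for w in valid if is_pangram(w, letters)]
```

Here "pace" fails (no center letter) and "cap" fails (too short), while "apatite" scores despite repeating letters, since reuse is allowed.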



How Did Multicellular Life Evolve?
Source: quantamagazine    Published: 2025-03-20 00:00:00

At first, life on Earth was simple. Cells existed, functioned and reproduced as free-living individuals. But then, something remarkable happened. Some cells joined forces, working together instead of being alone. This transition, known as multicellularity, was a pivotal event in the history of life on Earth. Multicellularity enabled greater biological complexity, which sparked an extraordinary diversity of organisms and structures. How life evolved from unicellular to multicellular organisms remains a mystery, though evidence indicates that this may have occurred multiple times independently. To understand what could have happened, Will Ratcliff at Georgia Tech has been conducting long-term evolution experiments on yeast in which multicellularity emerges spontaneously. In this episode of The Joy of Why podcast, Ratcliff discusses what his “snowflake yeast” model could reveal about the origins of multicellularity, the surprising discoveries his team has made, and how he responds to skeptics who question his approach. Listen on Apple Podcasts, Spotify, TuneIn or your favorite podcasting app, or stream it from Quanta. Transcript [Theme plays] STEVE STROGATZ: Hi Janna. Great to see you. JANNA LEVIN: Hey Steve, how you doing out there? STROGATZ: Good. Welcome, this is Season Four. We’re back! LEVIN: We’re back. Looking forward to this. STROGATZ: Yeah, me too. This is gonna be a really exciting season and I’m so thrilled that we’re doing it together. LEVIN: Yeah. And you’re kicking it off this season. You have the first episode. STROGATZ: Yeah, so I did. And the topic was one I had never thought about before, I wonder if you’ve run across it. It’s the question of the origin of multicellularity. LEVIN: Weirdly, I have thought about this. STROGATZ: You have? LEVIN: Well, I found it fascinating that single-celled organisms waffled for so long on the Earth. 
And that just nothing was happening for a very, very long time, billions of years. And then something finally happened. I always thought that was just remarkable. STROGATZ: But, so, I think of you thinking more about, like, black holes, space time, astrophysical stuff, but why are you thinking about this? LEVIN: Because science is fascinating. I like the science that other people are doing too. And sometimes I just wanna hear about it. You know, I muse about things that I don’t plan on working on necessarily. STROGATZ: Okay, I see. So not from some astrobiology, life-on-other-planet type. LEVIN: Not yet. Not yet anyway. STROGATZ: Huh. But you make the point about waffling. That single-celled critters, like we had bacteria, maybe cyanobacteria in the oceans, taking them a long time to get their act together to go multicellular. And you said you wondered why it took so long? LEVIN: Yeah. Right, I mean if you ask about astrobiology, is that happening on other planets? It’s just taken a really long time, and they’re just single-celled organisms floating around out there? STROGATZ: Right, what took so long? LEVIN: Yeah. STROGATZ: And did it only happen just once? And apparently, and this came as a shocker to me, it did not just happen once, it happened something like 50 times independently. LEVIN: That’s shocking. STROGATZ: Yeah, why wasn’t I informed? LEVIN: Yeah, why am I the last to know? STROGATZ: Well, I think when we were in high school and they were teaching us biology, they didn’t know that. But it’s now understood that, you know, in all these different kingdoms or whatever they call them in biology — so whether it’s animals, plants, fungi — they all figured out their own way to do it, to go multicellular. But in any case, one question then is how does a unicellular organism manage to make this transition, in any of these cases? 
I mean, there’s the historical question of ‘How did it happen?’, but what’s so amazing and really very courageous about our guest — Will Ratcliff is his name, he’s a biologist at Georgia Tech — is that he wants to do this in the lab. He wants to induce a multicellularity transition in a single-celled organism that we’ve all heard of — yeast — like the yeast in making beer or bread rising, whatever, which normally lives as a eukaryotic, single-celled organism. He has found a way to get them to act multicellular, to clump together into… Are they a colony? Are they trying to be a multicellular organism in their own right? LEVIN: Well, I really hope that stays in the lab. STROGATZ: You don’t wanna see that thing coming at you. LEVIN: Unleashed. STROGATZ: Coming at you on the street. LEVIN: I don’t want it coming out of my kitchen sink drain, you know, like one of those crazy cyclops fungi. STROGATZ: Well, we’re not there yet. I can tell you. That’s not where the episode is going. But as we’ll hear from Will, it is controversial. There are colleagues of his who feel what he’s doing is irrelevant to the history of life on Earth, that he’s just doing something in the lab, and it may be telling us very little about what happened in real biology. Whereas other people think, it’s fundamental mechanisms that he’s getting at. It’s opening up a realm of possibilities for us to explore. Some may have occurred, some may not have occurred, historically. But, still, it shows us what biology is capable of. So, um, you ready for Will Ratcliff? LEVIN: Fantastic. I’m ready. Let’s do it. STROGATZ: Okay. Let’s do it. [music] Will Ratcliff STROGATZ: Oh, hey there, Will. WILL RATCLIFF: Hey Steve, how’s it going? STROGATZ: Good. I’m really excited to have you on the show today. Can we begin by talking about your hobby farm? You know, I have to admit, I’m not sure I know what a hobby farm really is, or what happens there. 
RATCLIFF: I think it mainly means that we spend much more money than we would ever gain from any proceeds from the farm. We have goats. We have chickens, which lay more eggs than we can eat. We have peacocks, which haven’t hit maturity yet, so my neighbors are still okay with them. The males, I think, make a like a call that is like a “ah-AH-ah”, but you know, a hundred decibels or more. And, uh, we’ll see. We may be getting rid of those. STROGATZ: Some natural selection there. RATCLIFF: Indeed. STROGATZ: So, in addition to raising animals and plants though, you do, as we’re going to be talking about today, raise yeast. But before we get to that, could we just talk about, more broadly, the question of unicellular life versus multicellular life? What are some of the basic characteristics of each type? RATCLIFF: Yeah, so, you know, life on Earth has a very long history. It evolved around three-and-a-half billion years ago. And by then, we had honest-to-goodness cells, with the things that you’ve probably learned about in your high school biology class, right. They have a nucleus, which contains the DNA that encodes the genetic information that the cells use to perform their basic functions that, you know, then makes proteins that are the action parts of a cell. And so, cells are these fantastic biological machines, right, in which you have this concentrated soup of highly functional macromolecules. Now, life wasn’t always cellular. Cells are like one of these great innovations of life. And once sort-of cells evolved, they really took off, and it has been the sort-of basic building block of life for the last three-and-a-half billion years. Multicellular organisms are a kind of organism that is built upon the basis of cells, but where many cells live within one group and function essentially collectively. 
So, we are a multicellular organism, we contain approximately 40 trillion cells, which divide labor and perform all these various functions to allow us to do things in the multicellular, you know, environment — run around, have eyes, see things, talk on podcasts — that wouldn’t be possible for single-celled organisms, right? So, the evolution of multicellularity is a way of increasing biological complexity by taking what were formerly free-living individuals and turning them into parts of a new kind of individual: a multicellular organism. And it’s evolved, not once or twice, but many times. We don’t really have a great number, because we keep discovering more, actually. But there’s at least 50 independent transitions to multicellularity that we know of. STROGATZ: Whoa! That’s not something I remember hearing in my high school biology class. That’s something we only figured out, what, in the past few decades? RATCLIFF: Uh, yeah, I think it’s been a gradually increasing number. But I think as people, we tend to be very animal-centric, but then there’s a whole slew of things that are a little bit more esoteric. There’s cellular slime molds that live on land that, you know, move around like a slug, and then will grow as single cells and come together, like a transformer, to then do something as a group, you know. So, there’s different flavors of multicellularity that have evolved in different lineages. And I think partly we’ve known about this for a while, but especially as we develop the tools to understand bacteria and archaea — the big domains of single-cell life that have been around for a very long time — we’re finding more and more types of multicellular bacteria and archaea that we just didn’t know existed, because, unless you’re looking at them with a high-powered microscope or using other advanced techniques, you can’t just see it, right? STROGATZ: So, one thing I was wondering about here is dates. 
RATCLIFF: We have reasons to think that cellular life exists around three-and-a-half billion years ago, and Earth is only four-and-a-half billion years old total. So, it’s fairly early in Earth’s, you know, history as a planet. But it probably happened earlier, and by that time they’ve already done the things that are required to evolve cells, and have all these basic building blocks of life, like DNA, which contains the, sort-of, code of the organism. STROGATZ: Good. Yeah, this is very helpful, because there are so many interesting transitions to talk about, each of them being astonishing. You know, the origin of life from non-life would be one. But the very famous one that everybody hears about is the Cambrian explosion. And, if I’m hearing you right, that is not quite what we’re talking about. RATCLIFF: It’s one of the transitions. Well, let’s put it this way. The evolution of multicellularity is broader than just animals. It’s a process, through which lineages that are single-celled can form groups, which then become units of adaptation. Evolutionary units that can get more complex through, you know, natural selection. And the Cambrian explosion is an incredible period where animals, which had already been around for probably 100 million years or more, just start to figure out all of these innovations which are hallmarks of extant animals. Before the Cambrian explosion, things were soft and gelatinous and didn’t have eyes or skeletons. It’s questionable if they had brains. They don’t have any of these things. And then in a relatively short period of time, just a few tens of millions of years, all of these things show up. And we think it’s probably due to these, like, ecological arms races, where you have predators attacking prey. The prey start evolving defensive mechanisms. So, you know, you have just this explosion of animal complexity in what appears to be a very short period of time in geological terms. 
STROGATZ: But that Cambrian explosion, when the animals start to figure out all these evolutionary innovations, that’s later, right? Any estimate of how much later that is than this first appearance of multicellularity? RATCLIFF: Great question. So, the interesting thing about multicellularity, it’s evolved in very different time periods and different lineages. So, cyanobacteria were evolving multicellularity with honest-to-goodness development and cell differentiation around 3 billion years ago. It doesn’t take that long after you get cells that you start to get multicellular organisms evolving. So, the red algae, which are a seaweed, they begin evolving multicellularity around a billion years ago. The green algae start doing it around then too. Fungi, probably anywhere between a billion and half a billion years ago. Plants, we know that pretty well, that’s about 450 million years ago. Animals, they really start to take off around 600 million years ago. Again, it’s really hard to put an accurate date on that, so we have to be, sort of, you know, hedgy. And then the brown algae — the most complex kelp — they actually only began evolving multicellularity around 400 million years ago. And you know, I think we should not think of it as one process, but something where there are ecological niches available for multicellular forms, and there has to be a benefit to forming groups and evolving large size. That benefit has to be fairly prolonged. And most of the time, there isn’t, but occasionally there will be an opportunity for a lineage to begin exploring that ecology and not be inhibited by something else that’s already in that space. 
That might be why something like animals has only evolved once, because once you already have an animal, then it suppresses any other innovation to that space, like a first-mover advantage. STROGATZ: So, what are the benefits and what are the things that would inhibit you from that transition? RATCLIFF: Yeah. So, John Tyler Bonner is an evolutionary biologist, who worked on multicellularity decades ago, and he has this quote that I really like, that there’s always room one step up on the size scale, right? So, you know, the ecology of single-celled organisms, that’s a niche that’s been battled over for billions of years. And there’s lots of ways to make a living in that space and that’s why we are in a world of microbes. But, once you start forming multicellular groups, you can participate in a whole new ecology of larger size. You might be immune to the predators that were eating you previously, or maybe you’re able to overgrow competitors for a resource like light. If you imagine that you’re, you know, an algae growing on a rock in a stream, single-celled algae will get the light but, hey, if something can form groups, now they’re intercepting that resource before it gets to you. They win, right? Or, you know, groups also have advantages when it comes to motility and even division of labor and trading resources between cells. So, there’s many different reasons to become multicellular. And there isn’t just one reason why a lineage would evolve multicellularity. But what you need for this transition to occur is those reasons have to be there, and that benefit has to persist long enough that the lineage sort of stabilizes in a multicellular state and doesn’t just go back to being single-celled or die out. You can imagine there’s lots of ephemeral reasons to become multicellular, and then they go away, and then the single-celled competitors just win again, right? STROGATZ: That is very fascinating. I actually took biology with John Tyler Bonner. 
RATCLIFF: That’s really cool. STROGATZ: He was a very sweet man too. And you know what else, he had a lot of interest in physics, and I was a math and physics student, and this teacher, Professor Bonner, started talking about scaling laws as creatures get bigger, how does their metabolism scale with their body mass and things like that. And suddenly there was all this math in biology class, so I felt at home. But I’m bringing it up, not just to tell my own story, but because I get the feeling you’re some kind of math, physics, computer-ish kind of person. Is this true? RATCLIFF: No, I came to biology early and I came to computation and theory and physics late. But you’re right that we use all of those different approaches. My longest-running collaborations are with a physicist at Georgia Tech, Peter Yunker, and a mathematician in Sweden named Eric Libby, who is a theorist, and I’ve been working with both of them for 10 to 15 years. All of my students, you know, basically work at the interface of theory, computation and experiments. I guess that’s the space that we inhabit. We also throw synthetic biology into that pot, which is one of the beautiful things about working with yeast. STROGATZ: Wow. Let’s go into yeast now, I think it’s time. You’ve probably said it already, but what is the big idea underlying research you’ve been doing now for some years? RATCLIFF: Big picture, we want to understand how initially dumb clumps of cells, cells that are one or two mutations away from being single-celled, don’t really know that they’re organisms — they don’t have any adaptations to being multicellular, they’re just a dumb clump — how those dumb clumps of cells can evolve into increasingly complex multicellular organisms, with new morphologies, with cell-level integration, division of labor, and differentiation amongst the cells. Just like, we want to watch that process of how do these simple groups become complex. 
And this is, like, one of the biggest knowledge gaps in evolutionary biology. I mean, in my opinion. But it’s something where, you know, we can use the comparative record. We know multicellularity’s evolved dozens of times, and the only truly long-term evolution experiments we’ll have access to are these ones that happened on Earth over the last hundreds of millions or billions of years. But because they’re so old, and because those early progenitors, those early transitional steps, aren’t really preserved, we don’t really know the process through which simple groups evolve into increasingly complex organisms. So, what we’re doing in the lab is, we are evolving new multicellular life using in-laboratory directed evolution over multi-10,000 generation timescales, to watch how our initially simple groups of cells — dumb clumps of cells — figure out some of these fundamental challenges. How do you build a tough body? How do you overcome diffusion limitation after you’ve built a tough body and made a big group? How do you start to divide labor amongst yourselves when you only have one genome? How can you make that one genome be used for different purposes in different cells to underpin new behaviors at the multicellular level? Does this thing become entrenched in a multicellular state which prevents it from ever going back, or at least going back easily, to being single-celled? And so, we’re watching this stuff occur with a long-term evolution experiment, which, we’re now on generation 9,000 of what we call the Multicellularity Long-Term Evolution Experiment… M.U.L.T.E.E… MuL-TEE… absolutely a pun. It’s also named in homage to the long-term evolution experiment, which is a 70,000-and-counting generation experiment with single-celled E. coli, started by Rich Lenski and now run by Jeff Barrick. So, we’re basically trying to do something similar, but in the context of understanding how multicellular organisms evolve from scratch. 
How they can, sort of, co-opt basic physics and bootstrap their way to becoming organisms. STROGATZ: Beautiful. That’s great. That is incredibly ambitious. I mean, I hope the listeners get a feeling of the courage it takes. And I’m sure your critics would say hubris or you’re playing God or, you know, but still, this is a wild idea to try to make multicellularity happen in the lab. So maybe you should tell us — you said directed evolution. That’s a little bit of an unclear phrase unless you’re a professional. So, what are you doing to encourage this transition? RATCLIFF: Yeah. So, you know, we start out with a single-celled yeast. We did some preliminary experiments where we evolved them in an environment — it’s just a test tube that’s being shaken in an incubator — where it’s good to grow fast, because they have access to sugar water, and the faster you eat the sugar water, the more babies you can make. And it’s, you know, scramble competition, everyone has access to the same food. And then at the end of the day, we put them through a race to the bottom of the test tube, where we just put them on the bench for initially five minutes, but as they get better and better at sinking quickly, we make that time shorter and shorter to keep the pressure on them. And here, there’s an advantage to being big, because big groups sink faster through liquid media than small groups. This is just due to, you know, surface area-to-volume scaling relationships. Bigger groups will have more, you know, gravity pulling them down relative to the friction from their surface. You take the winners of that race to the bottom, the best ones. They go to fresh media and you just, kind of, keep repeating this very simple process. 
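The daily race to the bottom Ratcliff describes is truncation selection on settling speed, and its ratcheting effect can be sketched in a toy simulation. Everything here is an illustrative assumption, not the lab protocol: clusters are reduced to a single heritable "size" number, and the keep fraction and mutation size are invented.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def one_round(sizes, keep_fraction=0.2, mutation_sd=0.05):
    """One growth-and-transfer cycle: settle, select, repopulate."""
    n = len(sizes)
    # Settling race: the largest clusters sink fastest and are transferred.
    survivors = sorted(sizes, reverse=True)[: int(n * keep_fraction)]
    # Survivors repopulate fresh media; offspring inherit the parental size
    # with a small multiplicative mutation.
    return [random.choice(survivors) * random.gauss(1.0, mutation_sd)
            for _ in range(n)]

sizes = [1.0] * 200          # start from uniform small clusters
for day in range(100):       # one transfer per "day"
    sizes = one_round(sizes)

mean_size = sum(sizes) / len(sizes)
print(round(mean_size, 1))   # mean heritable size climbs well above 1.0
```

Nothing in this sketch models yeast biology; it only shows why "transfer the fastest sinkers" reliably drives size upward once size is heritable and mutation keeps supplying variation.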
So, yeast have a budding mechanism, where a mother cell will pop off a baby, from one of their poles, and then they can keep dividing and adding new cells to the same cell, right? So, in our early experiments that were just open-ended, we got these simple groups forming that have this beautiful fractal geometry. We had this easy mutation — it turns out it’s just one mutation in a regulatory element of the cell — that prevents daughter cells from separating. Super simple. Every time the cells divide, they pop off a baby but remain attached. And so, you get this sort of growing fractal branching pattern. Imagine something like a coral, or maybe like a branching plant. They kind of look like that, and they end up becoming more spherical with these you know nice branches. We call our yeast snowflake yeast. And you have this life cycle where they grow until they start to have packing-induced strain, they run out of space. And now if they add more cells, they just break a branch. And so, you have this emergent life cycle where they’re growing, they’re jamming, they’re breaking branches. Those little baby snowflakes pop off. And they even have a genetic bottleneck in this life cycle, in that the base of the branch that came off is one cell. So, as mutations arise, they get segregated between groups, and every group is basically clonal. Every cell in the group has the same genome. STROGATZ: Let me pause here. There’s a lot of things going on. I want to keep track of them, see if I got you. So, first of all, the big mutation is the one that doesn’t let the daughter detach from the mother, right? RATCLIFF: That’s the key thing for forming simple groups, correct, yep. So, we figured out what this mutation was, and when we started our long-term evolution experiment, we started them with basically one genotype, so one clone, that already had this mutation engineered into it, but with replicate populations. 
Because what we want to understand is, how do these simple groups of cells evolve to become more complex? And I don’t want that to be confounded by the mechanism through which they form groups in the first place. So, we have actually 15 parallel evolving populations, that started out the same in the beginning, but we actually have different metabolic treatments for them. So, one of them, is taking all their sugar, and they are burning it up with aerobic respiration, using air from the environment to respire their sugar. One of them, we broke their mitochondria in the very beginning, so they don’t get to use respiration, they can only ferment, and they get a much lower energetic payoff from that. But they don’t have to worry about oxygen diffusion anymore. So, sort of a trade-off there. And then one of them can do both; it first ferments and then it respires. STROGATZ: Okay. So, when you spoke of 15 different lines, they all have the property that their daughters will stay attached. But then you say some get to use oxygen, in this advantageous way for their metabolism through respiration, others have to use fermentation. RATCLIFF: Which is how you make beer, by the way. STROGATZ: Yeah. Okay, so we have different ways. And then you said some of them, at least, don’t have to worry about oxygen diffusion. What’s the worry? What is the scary thing about oxygen diffusion? RATCLIFF: So, we thought initially, that the ones that could use oxygen would be the ones that evolved the most interesting multicellular traits. But it turns out that they’ve actually stayed very simple for almost 10,000 generations. They haven’t done that much in the last 8,950 generations. STROGATZ: They peaked early. RATCLIFF: They peaked early, and they’re only about six times bigger than the ancestor, and we don’t see any beginnings of cell differentiation. They’re just simple kind of bigger snowflakes. The anaerobic ones, they have evolved to be more than 20,000 times bigger than their ancestor. 
STROGATZ: What? RATCLIFF: Yes. STROGATZ: Six in one case, 20,000 in the other case? RATCLIFF: Yeah, yeah, yeah. And it turns out that this is because there’s a trade-off that’s introduced by oxygen. If you form a body, and oxygen is this valuable resource that if you get it you can grow a lot more, but it can’t diffuse very far into the organism, then all of a sudden, the bigger you are, the smaller a proportion of your cells are able to access this really valuable resource, and your growth rate just falls off a cliff. STROGATZ: Oh, wow, your surface is so small compared to your interior. RATCLIFF: Exactly. The bigger you are, the larger your radius is, the smaller a proportion of your biomass has access to oxygen. And so, in our case, the anaerobic line, they’ve done the interesting things because they’re not being constrained by oxygen. They’ve evolved large size. They’ve evolved all these interesting behaviors. And they’re solving all these fundamental multicellular problems. STROGATZ: If I’m hearing you right, you’re saying something like that the anaerobic ones, because they don’t get this sugar high from the availability of oxygen early on, they have to be resourceful. They have to come up with all kinds of other innovations, and they do. RATCLIFF: So yeah, I like the way you phrased that, but to be just a little bit more precise with our system. STROGATZ: Yeah, please. RATCLIFF: The ones that have access to oxygen, as they get bigger and bigger, their slower and slower growth rates really push back against them, and kind of act in the opposite direction of any benefits that come from size. But if you remove oxygen, now bigger is better. The smaller ones go extinct and the bigger ones win. And then they figure out a way to get bigger. And they can really push the envelope on size and explore large size in a way that the ones with oxygen can’t, because they’re getting pushed back on by growth rate. 
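The surface-to-volume argument here is easy to make quantitative: if oxygen only penetrates a fixed depth below a cluster's surface, the oxygenated fraction of a spherical cluster collapses as the radius grows. The penetration depth of 1 unit below is an arbitrary illustration, not a measured value.

```python
def oxygenated_fraction(radius, depth=1.0):
    """Fraction of a sphere's volume lying within `depth` of its surface."""
    if radius <= depth:
        return 1.0  # small clusters are oxygenated throughout
    # Subtract the anoxic inner sphere of radius (radius - depth).
    return 1.0 - ((radius - depth) / radius) ** 3

# As radius grows, the oxygenated share collapses — the growth-rate
# penalty the aerobic lines run into.
for r in (1, 2, 10, 100):
    print(r, round(oxygenated_fraction(r), 3))
```

At radius 2 the cluster is still 87.5% oxygenated, but at radius 100 only about 3% of its volume sees oxygen, which is why growth rate "falls off a cliff" for large aerobic clusters while the anaerobic lines are free to get big.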
But then as they get bigger and tougher, they actually start to have real trade-offs that are created by forming big bodies. They’re so big that now they’re struggling to bring sugar into these groups, because they’re actually becoming macroscopic. You know, they’re bigger than fruit flies now. They’re large. STROGATZ: That’s wild. RATCLIFF: Yeah. And, they also face another constraint. I mentioned that they grow and would normally break due to physical strain arising from packing problems. But they solve that, by figuring out how to make tough bodies, by making their cells long enough that they actually wrap around one another and entangle. This is now a vining procedure where, if you break one branch of a vine, you know, the ivy is still not coming off your shed. I live in Atlanta, I’m tugging ivy on trees and sheds all the time and it’s very difficult, because entanglement percolates those forces throughout the entire, you know, entangled structure. And so now, you don’t just break one bond to break apart the snowflake yeast, you have to break apart hundreds of thousands. And it becomes much, much tougher as a material. And we even understand the genetic basis of this, all the way up to the physics, it’s really cool to be able to watch mutations arising that change the properties of cells that underpin emergent multicellular changes, which natural selection can see and can act upon, and can, sort-of, drive innovation in that multicellular space. [music] LEVIN: It’s all very surprising, right? Because he’s got this hypothesis going on, on the basis of what we believe about the importance of oxygen, and we even talk about it when we’re looking for other planets and life on other planets. Will there be oxygen, and is there water? And all this stuff that we’re really so certain is what’s needed to really accelerate life and life radiating. But now, he’s amazingly saying, well maybe, maybe that’s just not the case here. You have these oxygen hogs that got stuck. 
STROGATZ: Oh, I love your exobiology perspective on this. I wouldn’t have thought of that. That’s so interesting. I don’t know what to make of it. To me, it sort of sounded like if you’ve got a hand tied behind your back and you’re forced to ferment, you’re gonna be resourceful. You’re gonna be like that old folk saying about whatever doesn’t kill you makes you stronger, or something like that. LEVIN: Right. Evolution, as they always remind me, is not just mutation. It’s mutation and environmental pressure. So, it’s the hostility of the environment in some sense that drives the mutation. STROGATZ: Interesting point. We will hear more from Will after the break. [music] STROGATZ: Welcome back to The Joy of Why. We’re here with Will Ratcliff and we’re discussing the evolution of multicellularity. STROGATZ: I’d like to get into a question about clusters versus organisms. What would make an organism different than a colony? And how do you know which kind of thing you’re getting through these selection experiments? RATCLIFF: It’s a great question. And it really cuts to the core of what do we mean by multicellularity. And I think a lot of confusion in my field, for the last half a century, has come down to poorly resolved questions of philosophy about what do we mean by these words, and people inadvertently speaking at cross purposes. Okay, so part of this is that the word multicellular really means three different things, and we’re not very clear with our language. It’s treated as a noun in English to say, you know, multicellularity, but it’s really an adjective which modifies different nouns. So, you could have a multicellular group. That’s just, you know, a group that contains more than one cell. You could have a multicellular Darwinian individual, and that is a multicellular group which participates in the process of evolution as an entity at the group level. 
So, something which reproduces, where mutations can arise which generate novelty in a multicellular trait, and which natural selection can act on and cause evolutionary change in a population of groups. That’s adaptation at the group level, so that would be a multicellular Darwinian individual. And then you have multicellular organisms. And the sort of philosophical distinctions of what’s an individual and what’s an organism, there’s been a lot of work done in the last 20 years, and I’m pretty happy with the results of where that field is right now, which is that organisms are functional units. Organisms have integration of parts and work well at the organismal level with, you know, high function and minimal conflict. And so, we are all three. We’re a group. We’re a Darwinian individual. And we’re organisms. And so, the distinction is that there are, sort of, progressively higher bars for how you get to these additional steps, and they tend to occur sequentially. The first step would be forming a group. The second step would be making that group capable of Darwinian evolution. And then, as a consequence of group adaptations, you can get organisms, which would be functional integration of cells, which are now parts of the new group organism. And so, a trait that would be diagnostic of that would be cellular specialization or differentiation, especially if it comes down to reproductive specialization. People love that in evolutionary biology because if cells give up their direct reproduction, they’re no longer making offspring, that’s something which is a behavior that you really can’t ascribe to the direct fitness interests of that cell, right? So, your skin cells will never make a new Steve, right? Never. They are entrenched in the body, not on the line of descent. But it’s okay, because they are helping you make, you know, your reproductive cells reproduce. And so, the vast majority of our cells are not directly on the line of descent, but that is a derived state. 
Originally, every cell made copies of itself. They were on the line of descent. Originally, simple groups don’t have this kind of reproductive specialization. But over millions of generations of multicellular adaptation, you get organisms that have, now, cellular parts, where those parts work together to allow the organism to do things that it couldn’t have done before, and an important part of that is specialization. STROGATZ: Just to make sure I get that point. What does it mean to be in the line of descent, in relation to skin cells versus what, like gonadal cells? RATCLIFF: Yeah, sperm and eggs. And this isn’t a strict requirement, right? You could have something like plants that don’t have this type of line of descent segregation. But nonetheless, you know, if you look at a tree, it makes flowers, it makes seeds, right? You have this differentiation into cells that will be the reproductive structures, and those that don’t. If you’re a wood cell, you just give up your life to make wood. Wood is basically a series of tubes. You differentiate into a tube, then you die. STROGATZ: They’re doing it for the good of the multicellular group, or something. RATCLIFF: That’s right, and it’s also for the good of their own genome. STROGATZ: And their genome, yeah. RATCLIFF: Because usually those that are on the line of descent are related to them. And that’s how you, kind of, square it. So, there’s apparent altruism at the level of the cell, but there isn’t really altruism at the level of the genome. STROGATZ: I mean, when you start talking about Darwinian adaptation at the level of the group, I hear Richard Dawkins’s British accent in my ear, drilling in that there’s no selection except at the level of the gene. And then if it were Stephen Jay Gould talking to me, he would say there’s no selection except at the level of the individual. RATCLIFF: Yes. 
STROGATZ: I think. I’m oversimplifying, but group selection is where people traditionally start yelling at you. RATCLIFF: That’s correct. You’re totally right, and I think there should be some sociological studies on this in evolutionary biology, because it has been much more, do you believe the consensus rather than, like, actually rigorously thinking through it. And in the last 15, 20 years, I’d say the anti-group selection sentiment, that was very powerful all the way up through the 2000s, has mostly melted away, as people have embraced more pluralistic philosophies that, like, there is sort of one evolutionary process, you can view it through different perspectives, sometimes it makes more sense to use a group selection model. And, I think if we’re thinking about individuals, in this, in the Gould sense, selection acting on the traits of individuals, for multicellular organisms those individuals are groups. STROGATZ: Of course, that’s why it’s always a little bit of a confusing distinction, right? I mean, the individual is made of other things. RATCLIFF: Yes, and people are happy to sort-of round them up to just one, but there was a point where it wasn’t just one. It was a simple group, and it wasn’t so clear that that group was an individual. Like a snowflake yeast, you can break off any cell, put it into its own flask of media, and it’ll turn back into another snowflake yeast, right? That wouldn’t happen with one of my arm cells. Now, if you go for a really long time in my experiment, that stops happening. But in the beginning, cells are just in groups as vehicles. And then over time, they gain enough adaptations, as a consequence of selection acting on the traits of groups, and really caring about the fitness of groups, that cell-level fitness, outside of the context of groups, starts to really take it on the nose. They don’t do so well as being outside of groups anymore. 
And you know, they’re evolving, the beginnings of division of labor, different cell states from one genome. This is unpublished work, so I want to be appropriately hedged here. But we’ve done like single-cell RNA sequencing, and we can see new cell states evolving over the five thousand-generation timescale. We go from one, sort of, putative cell type to three. And we think we know what they’re doing, like we think it is actually adaptive differentiation, as opposed to just sort of noisy chaos. STROGATZ: If this pans out, it’s saying that the cells have differentiated in their gene expression. Is that what you’re saying? RATCLIFF: Exactly, into different sort of behaviors. STROGATZ: Well, all right. So, you’re seeing these interesting transitions in your lab, you’re inducing them through the selection you’re putting on. But, to what extent do we think these multicellular transitions that you’re provoking shed any light on what happened historically in the wild? RATCLIFF: That’s a great question. I mean, actually I love that question, because it’s an important scientific question. It’s something I’ve thought a lot about, in the sense that in order for our experiments to have meaning, they need to be somewhat generalizable. Now, I think the caveat here is that there is no one answer to how multicellularity evolved. It likely evolved in very different ways, and for very different reasons, in plants and animals and mushroom-forming fungi. You know, it’s not a single thing. But nonetheless, the thing that does unite it all is this evolutionary process. You have to have group formation, those groups become units of selection, and they turn into organisms as a consequence of group adaptation. And that evolutionary process, while it might play out in different ways in different lineages, some of these things are fundamental. So that transition to individuals that become organisms, that’s universal. 
And size is universal, and the physical side-effects that come with size, scaling laws, challenges with diffusion, and the opportunities that come to break those trade-offs through innovations, those things are all generalizable, even if they take different paths in different lineages, because they’re all proximate creatures of their environment and their gene pool, right? And we’ve never seen those processes play out in nature. And I don’t know that we ever will, because they’re historical things that we don’t have the actual samples to see. And one of the things that we can do is, while we’re not saying this is how multicellularity evolved in any one lineage, what we’re saying is this is how multicellularity can evolve, and this is how some of these things that, maybe looking in hindsight, you think you need really complex developmental control… oh, actually it turns out you don’t, because physics gives you all these things for free, that are kind of noisy, but they work, and you can bootstrap those into your evolutionary life cycle and build upon them, without necessarily having to evolve those traits for a reason. So, a lot of things in our experiment have turned out to be easier than we expected, and while the details may differ, I suspect that some version of these things that we’re seeing in our experiment play out in the different transitions in nature. STROGATZ: You seem to have some practice with answering that question. You have thought about that one a lot. I like that answer. RATCLIFF: Thanks. STROGATZ: Well, all right. You mentioned earlier a scientist named Rich Lenski, who had done this very long-term evolution experiment with bacteria, and that that’s been passed on now. Do you have a Jeff Barrick lined up? You’re not quite close to retirement yet, I don’t suppose. But have you thought about this? Is this experiment going to outlive you, I guess is what I’m asking? RATCLIFF: I would hope so. 
But, first of all, I want to say I’d be remiss if I didn’t say that our experiment is actually run in my lab by Ozan Bozdag, who’s a research scientist with me, who started the MuLTEE as a postdoc in 2016. And it’s kept working and kept succeeding, and he’s making his career essentially running this experiment. So, like, without Ozan, I wouldn’t be here and doing this. He’s the one that, kind-of, figured out how to really make it work. I’d actually be interested in doing this a little bit differently perhaps than the way the LTEE has been run, which is, I want to run the standard MuLTEE myself, but I wouldn’t mind doing like a multiverse-type thing and have collaborators or others that were interested in running their own version of the experiment. There’s no reason that it has to be one timeline. I mean, you know, we could go all Loki. STROGATZ: I see, separate universes doing the experiment. RATCLIFF: Sure, I mean, we already have kits that we send to teachers, where they can evolve their own snowflake yeast, or do experiments with predators. We’re actually making a new kit this summer for these hydrodynamic-flow behaviors that we’ve been observing, that snowflake yeast actually act like volcanoes or sea sponges, pulling nutrients through their bodies and shooting them up at the center of the group, which totally overcomes diffusion limitation. But also, if scientists want to work on our system, then, I think, if we democratize this and make it a resource for the community, science benefits, right? STROGATZ: So, you’ve been very good about responding to what are some aggressive questions here. Do you ever find it discouraging? And do you ever think about, you know, I don’t need this aggravation? RATCLIFF: Not for a long time. I felt mostly like good vibes from the broader community for many years now. 
But when I was just starting out, I did have some experiences that were discouraging. Like Carl Zimmer had interviewed me for the New York Times, and then got a bunch of critiques, and then re-interviewed me and I, as a postdoc, had to like defend myself to very senior faculty that I really looked up to. And, um, that didn’t feel very good. It felt sort of, like, I wasn’t welcome in those communities where it seemed like at the time, maybe, we were just bullshitting and trying to spin a good story, and there wasn’t much substance there. That definitely affected my own approach to science, and my own thoughts on inclusion and just being really supportive of younger scientists. Anytime you critique a paper in my field, you might think you’re critiquing the senior scientists on the paper, but they usually have a graduate student or a postdoc who wrote the thing. It’s their life for years, and they’re the ones that really feel the critique, right? And so, criticism is critical for science. And I love good, rigorous, critical debate. Like, I hang out with physicists and mathematicians. In those communities, it’s a sign of respect to be direct, to ask hard questions, and to endeavor to get at the truth. And I really like that. But at the same time, I love writing why I like a paper. I love writing why I think this paper is important, and how it changes the way I think about a field. And so, when I’m reviewing papers and grants, the first thing I do is write a detailed review of why the paper is important and cool. Even if I have major concerns and questions, which I will get to, I always make time to acknowledge the importance of the work. And similarly, like, in the context of multicellularity, I’m always trying to bring new people into the field. Like, we’re pluralists, we want new people to come in, we want you to bring your systems and your ideas, there isn’t just one way of thinking about this. 
I think those early experiences that I had were fairly rough and made me, sort of, avoid interacting with those communities, maybe for longer than I wish I had in hindsight. STROGATZ: Do you think the harsh criticism, or at least penetrating criticism, did it sharpen you up? Do you think it improved the work? Did you write better discussion sections? Did you write more persuasive introductions? RATCLIFF: Perhaps. Well, you remember when you asked me, you know, what’s the importance of your work? And I had a polished answer, and that’s because I’ve been challenged on this enough times over the last 15 years that I had to really think hard about that, right? And certainly thinking hard about it changes the way you do your science, right? You develop the areas that you think are more general and more impactful, as opposed to just doing the next experiment. That being said, the criticisms, the sharp and penetrating criticisms I’ve always appreciated, because that makes your science better. The criticisms that are simply dismissive are the ones that I always have found the hardest, the most frustrating. Because, you know, if someone says, and I’ve gotten this a lot, “It’s cool what you do, but snowflake yeast aren’t multicellular.” I mean, then I have to question, okay, am I going to spend the next 10 minutes explaining the philosophy behind what multicellularity is? Like, there isn’t just one thing here, right? And so, it’s the sort of dismissive side of the criticism that I’ve found the least productive. Whereas like, sharp, penetrating, tough questions… I mean, we’re scientists… we kind-of like that stuff. STROGATZ: So good. Thank you, Will. I really appreciate it because, you know, you have fielded, I’ve tried my best to sort-of simulate those tough questions and give you a chance to respond to them. So, maybe in the future you can just play this for some of those people. Save your breath. RATCLIFF: That’s right, that’s right. 
STROGATZ: Anyway, it’s been really a great pleasure talking to you. RATCLIFF: Likewise, so much. STROGATZ: Thank you very much. So, we had Will Ratcliff with us, talking about the evolution of multicellularity, and it has really been fun. Thank you. RATCLIFF: Thanks, Steve. [Interview ends] STROGATZ: What about that? Do you have any personal experiences with that, or maybe you’ve seen it with your own students? LEVIN: Oh man. I’m still a student of the subject, and even now, it really resonated in that, it can be very discouraging if someone’s dismissive. He’s exactly right. It’s okay if somebody’s, like, really critical and you’re exploring together, and you’re gonna get to the answer. If it’s right, it’s right. If it’s wrong, it’s wrong. But to be dismissive, that is something that, it’s not only hard to hear, it sort of engenders a little bit of distrust, I think. ‘Cause there’s something about that that doesn’t feel like the program, you know. STROGATZ: The person who would dismiss you? You feel like, I don’t trust that person so much anymore? LEVIN: When I hear people being dismissive, it doesn’t have to just be at me, I get a little suspicious. STROGATZ: Uh-huh, like they have another agenda about self-promotion or something else? LEVIN: Maybe, yeah. You know, something. Because aren’t we here because we’re driven by excitement and curiosity? That so emanates from him. What a great colleague to have. I wanna get a letter of review from him. I want him to review one of my papers. But what a great colleague, that’s what you want people to bring to the table. And yeah, you want people to tell you, you know, this isn’t the right direction if it really isn’t, and to explain why, and, you know, be able to navigate that. But that requires real engagement. STROGATZ: Something about his phrasing that, to be dismissed is not productive. I thought that was such an interesting operational word to use. I mean, not that it’s insulting or hurtful; it’s not productive. 
LEVIN: Yeah. And it could take the wind out of your sails, because then there isn’t anything to discuss. If you have something to hang onto and a point to respond to with a compelling, rational, mathematical, formal, experimental argument, whichever avenue is required, that you can keep going. STROGATZ: It doesn’t help you be a better scientist. It doesn’t help you make new discoveries, to just be dismissed like that. Well, this has been so much fun talking to you about this episode. LEVIN: Always. STROGATZ: I can’t wait to do the next one. LEVIN: Thanks for listening. If you’re enjoying The Joy of Why and you’re not already subscribed, hit the subscribe or follow button where you’re listening. You can also leave a review for the show, it helps people find this podcast. Find articles, newsletters, videos, and more at quantamagazine.org. STROGATZ: The Joy of Why is a podcast from Quanta Magazine, an editorially independent publication supported by the Simons Foundation. Funding decisions by the Simons Foundation have no influence on the selection of topics, guests, or other editorial decisions in this podcast, or in Quanta Magazine. The Joy of Why is produced by PRX productions. The production team is Caitlin Faulds, Livia Brock, Genevieve Sponsler, and Merritt Jacob. The executive producer of PRX Productions is Jocelyn Gonzalez. Edwin Ochoa is our project manager. From Quanta Magazine, Simon Frantz and Samir Patel provided editorial guidance, with support from Matt Carlstrom, Samuel Velasco, Simone Barr, and Michael Kanyongolo. Samir Patel is Quanta’s Editor in Chief. Our theme music is from APM Music. The episode art is by Peter Greenwood, and our logo is by Jaki King and Kristina Armitage. Special thanks to the Columbia Journalism School and the Cornell Broadcast Studios. I’m your host, Steve Strogatz. If you have any questions or comments for us, please email us at [email protected]. Thanks for listening.



Our Toxic Relationship with Herbicides
Source: undark    Published: 2025-03-20 00:00:00

I was handed my first bottle of herbicide in my senior year of college, during an invasive shrub removal on the University of Georgia’s campus in Athens. I had taken part in invasive plant removals for years, both throughout Athens and in my hometown in Wisconsin, but all of those removals had been done by hand, with painstaking hours spent pulling up seedlings, cutting vines off trees, and chopping down shrubs. At this removal, we were applying glyphosate (the active ingredient in Roundup) to the stumps of the shrubs we were cutting down in an attempt to fully kill them in one go. As I held the bottle of glyphosate — stained blue so that we could see where it was sprayed — a flurry of thoughts raced through my brain: “I’m here to help the environment — don’t herbicides cause harm? Is this going to hurt native plants? Where will the glyphosate end up, and what other organisms will it affect? What happens if this gets on my skin? Will I get cancer from this?” Though I was concerned and conflicted, the reason for the glyphosate was clear: Without herbicides, any work we did that day would be useless. We were removing heavenly bamboo (Nandina domestica), an invasive shrub commonly used in landscaping whose berries are poisonous to native birds. The roots were too dense to dig up, so we cut the plant down as close to the ground as possible. But, as with many other invasive plants, the stump and root-sprouting ability of Nandina meant that leaves would reappear within a year. By using herbicides, we could prevent the Nandina from resprouting and increase the chance of actually killing it. The more I work with invasive plants, both as a scientist and an avid participant in removal efforts, the more I understand why I was handed that bottle of glyphosate. Invasive plants wreak havoc on ecosystems by outcompeting native species, often forming dense stands of nothing but the invader. 
This leads to a reduction in native plants and the other native organisms that depend on them. Learning about herbicides has convinced me that although my initial concerns were valid, herbicides currently offer the best hope we have to control invasive plants. Herbicides need not be used in all situations: Some plants are effectively controlled through mowing, burning, or the use of natural pests. In small areas, hand pulling plants can be enough for local eradication, though this takes dedication. (I’m on year three of yanking Chinese privet regrowth in my backyard.) But in most cases, these other options aren’t effective or the invasive plant covers too much land to be controlled by labor-intensive methods. When an invasive plant grows over multiple acres — as I’ve seen with my study subject, cogongrass — herbicide application can be done by one person in mere hours, compared with the days it would take people to pull plants by hand. Using herbicides drastically reduces the number of people and the time needed to manage invasive plants, increasing our chance of making a meaningful dent in existing populations. The level of herbicides used to control invasive plants is usually not enough to impact people living nearby, according to the Environmental Protection Agency, but managers applying herbicides can be exposed to much higher levels and are thus at higher risk for acute and chronic illness. If I had gotten glyphosate on my skin, it could have caused some irritation but likely would not have had any long-term effects. (Accidental ingestion, though, could have been more serious.) 
As for cancer, the EPA describes glyphosate as “not likely to be carcinogenic to humans,” but the International Agency for Research on Cancer reached the opposite conclusion and lists it as “probably carcinogenic to humans.” Some studies have linked glyphosate to an increased risk of various cancers, with a 2021 meta-analysis concluding that there is compelling evidence that glyphosate-based herbicides cause non-Hodgkin lymphoma. Herbicides don’t have a much better track record environmentally. In the same way that invasive plants choke out any competitors, many herbicides indiscriminately kill whatever plants they touch. The effects of herbicides can also spread beyond plants themselves, affecting microorganisms and animals. Landscapes created by herbicides and invasive plants are visually opposite but face the same problem: The brown fields of plants killed by herbicides and the vibrant green fields of densely growing invasive plants both lack biodiversity. This dichotomy puts people dedicated to protecting their local biodiversity in an impossible spot: Do they risk their own health and that of their local ecosystem to effectively control invasive plants using herbicides, or do they let invasive plants smother the native organisms they hold dear? Many managers feel the same way I do; we don’t want to use herbicides, but we feel that there aren’t better options. Even after herbicides are used, invasive plants still dominate many landscapes. Without herbicides, we will be admitting defeat, and invasive plants will spread unchecked far beyond their current boundaries. Although herbicides are currently our best tool in this fight, we cannot and should not rely on them forever. Some invasive plants have already developed resistance, which renders current herbicides useless against them and will likely render them useless against more plants in the future. 
We could develop new herbicides, but they would likely continue to create issues for both human and environmental health. The solution to the problems caused by herbicides is not to turn a blind eye to the issues they cause, but to develop better solutions for invasive plant management — ideally, new management methods that target one or a few species, are safer for humans, and have a lower environmental impact. Research is currently being done to create control methods that target only one species (methods such as biological control, RNAi, and autotoxicity), which would allow us to kill only the invasive plants while leaving other organisms, including humans, unharmed. It takes more work to develop a species-specific management method for every invasive plant than it does to create one herbicide that kills indiscriminately. But I believe that this is the most sustainable way for us to manage invasive plants while causing the least harm. I hope for a day when we won’t need to use herbicides, but until then I’ll use my glyphosate sparingly. Elizabeth Esser is a Ph.D. student at Mississippi State University whose research focuses on invasive plant management. She was part of the Fall 2024 cohort of the Young Voices of Science program.