Engineering – 91±ŹÁÏ News (/news)

Q&A: 91±ŹÁÏ scientists decode the logic behind cells’ mysterious protein stockpiles
/news/2026/04/22/paul-wiggins-protein-overabundance-study/
Wed, 22 Apr 2026

Small blue blobs line up along a graph of time
In a new study, 91±ŹÁÏ researchers explored why cells “stockpile” some proteins that are required for growth. Shown here is a series of “heat map” images that detail the abundance of a required protein over five bacterial generations — red represents more protein within the cell, while dark blue represents less. When the researchers disabled the gene necessary to make the protein, the abundance of that protein diminished in each generation (top row). The cells in the bottom row had a functioning gene, so the protein remained abundant. Photo: H. James Cho et al./Science Advances

As far as research subjects go, it’s not always easy to find common ground with a single-celled bacterium. Yet the more Paul Wiggins studies his model bacteria, A. baylyi, the more he sees surprising commonalities between their behavior and our own as humans.

“It was mortifying to be stumped for so long by what appeared to be completely counterintuitive behavior only to realize that I engage in exactly the same behavior every day,” said Wiggins, an associate professor of both physics and bioengineering at the 91±ŹÁÏ.

Scientists in Wiggins’ lab use experiments and modeling to understand the global principles that govern gene expression, and protein abundance in particular. In a new study published in Science Advances, Wiggins’ team discovered that A. baylyi cells amass huge surpluses of essential proteins, rather than taking the seemingly more efficient approach of making just enough to survive. 91±ŹÁÏ News chatted with Wiggins to learn about the remarkably relatable reason for this puzzling behavior.

The cell says, “Screw it, it’s virtually free. Let’s make extra.”

Paul Wiggins, 91±ŹÁÏ associate professor of both physics and bioengineering

This work grew out of a mystery you and your team uncovered. Tell us about that mystery.

Paul Wiggins: Genes are the blueprints for proteins — we say they “code for proteins.” A. baylyi has a number of genes that code for proteins that we know are essential for cell growth. But we didn’t know exactly what each of these proteins does. In 2016, we were attempting to uncover these proteins’ specific functions with collaborators. To do this, we disrupted each gene so that the cells couldn’t make any more protein — they were left with a now-dwindling supply of whatever they’d previously made. Then we would watch the cells under a microscope to determine when and how cellular processes would fail.

As an example, we knocked out a gene that coded for a protein that we found was responsible for cell wall synthesis — it makes the protein-sugar chainmail that prevents the cells from rupturing, or lysing. And you can watch the video we recorded to see what happened: The cells grew and divided for a while, but then all of a sudden they inflated and just popped.

small black blobs outlined in red grow and divide and then begin to disappear
The cells, outlined in red, grow and divide until they swell and burst. Their red outlines disappear as they explode. Photo: H. James Choi, Kevin J. Cutler, Teresa W. Lo and Paul Wiggins

In that example, something strange happened. We would expect the cell walls to start to fail almost immediately after the disruption happened because every time the cells divide, the remaining protein is divided among the offspring cells, so pretty quickly there wouldn’t be enough to sustain the new cell walls. However, growth continued, one generation after another, before the cells finally failed after four rounds of division!

Why did it take so long? Gene after gene showed the same pattern. We realized that each cell must have made a ton of extra proteins — far more than it needed. So after we knocked out that essential gene, the cell was able to run on fumes for a while — and was even able to pass stores of that protein on to its offspring. That finding was initially a huge surprise. We all expected, naively, that if a cell only needed a few copies of a protein to function, it would only make a few — anything more would be a waste of resources and energy. It’d be like taking a seven-day trip and packing 30 pairs of socks. And yet, this behavior seemed to be common for lots of essential genes.

What do you think is the cause of this protein overabundance?

A portrait of Paul Wiggins
Paul Wiggins Photo: 91±ŹÁÏ

PW: Baking is a good analogy. If you want to make an apple pie, you probably only buy as many apples as you need for that recipe. But you keep a large quantity of salt in your pantry. You might only need a teaspoon of salt to make any given meal, but none of us go to the store and buy salt a teaspoon at a time. Salt is so cheap and easy to store that, relative to the cost of other ingredients in your meal, it’s basically free to keep in large quantities. And critically, you don’t want to run out of salt when you’re cooking.

We demonstrated that something analogous is happening in A. baylyi cells for most of the essential genes. Only about 30% of a cell’s essential genes code for proteins that are “expensive” in that the cells need these proteins in large numbers. It would be very costly to, say, double an already large number. These are the apples in our apple pie analogy — the cell makes just enough of those proteins to get by.

The remaining 70% of essential genes, however, code for proteins that the cell does not need in large numbers. In fact, relative to that other 30%, the cell needs so few of these proteins that it’s basically free to produce a bunch of extras. Doubling the production of those proteins, say from 30 to 60 copies, is a drop in the bucket if the cell’s overall budget is three million proteins. So the cell says, “Screw it, it’s virtually free. Let’s make extra so we don’t run out.” In some cases a cell might make 10 times more protein than it will ever need.
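Wiggins’ “drop in the bucket” point is easy to check with a quick sketch (the copy numbers here are the article’s illustrative figures, not measured values):

```python
budget = 3_000_000   # illustrative total protein count for one cell
baseline = 30        # copies of a low-abundance essential protein
doubled = 60         # doubling production, as in the example above

extra_cost = (doubled - baseline) / budget
print(f"{extra_cost:.6%} of the protein budget")  # 0.001000% of the protein budget
```

Even a tenfold stockpile of such a protein would still consume only a few hundredths of a percent of the cell’s overall budget.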

Why is this strategy useful for the cells?

PW: This overabundance strategy is important because otherwise a cell might fail to produce enough of something critical. Protein synthesis is an imprecise process — cells sometimes make a little more or a little less of things than they’re programmed to make. Some essential proteins are made at such low numbers that any deviation from the plan could leave a cell with zero copies of that protein. This is less of a problem for essential proteins that are made in much higher numbers.
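The study’s actual noise model isn’t described here, but the risk Wiggins describes can be illustrated with a simple Poisson assumption (purely a sketch, not the paper’s analysis): if the realized copy number scatters around a programmed mean, the chance of ending up with zero copies collapses once the mean is large.

```python
import math

def p_zero_copies(mean_copies: float) -> float:
    """Chance of zero copies if the realized count is Poisson-distributed
    around the programmed mean (an illustrative assumption)."""
    return math.exp(-mean_copies)

print(f"{p_zero_copies(3):.3f}")   # 0.050 -> roughly a 1-in-20 chance of having none
print(f"{p_zero_copies(30):.1e}")  # 9.4e-14 -> effectively impossible
```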

How do these findings support or challenge previous ideas about how cells function?

PW: Depending on who you talk to, this is either definitely wrong or completely obvious. On the one hand, it’s a really ingrained idea that organisms are always optimizing everything, which would naively suggest that cells should make exactly what they need — no more, no less. However, this is clearly not the case. Other studies have observed these kinds of protein surpluses in cells before, but it wasn’t appreciated quite how widespread this phenomenon was. Previously, researchers proposed that overabundance might be a hedge against changing conditions — maybe cells are stockpiling proteins in case times get tough. We’re suggesting that it’s a hedge against the cells failing to make the right number of essential proteins.

Co-authors include , a 91±ŹÁÏ postdoctoral researcher of physics; Teresa W. Lo and , former 91±ŹÁÏ doctoral students of physics; , a 91±ŹÁÏ graduate student of physics; and , a 91±ŹÁÏ postdoctoral researcher of laboratory medicine and pathology.

This research was funded by the National Science Foundation and the National Institutes of Health.

For more information, contact Wiggins at pwiggins@uw.edu.

Tiny cameras in earbuds let users talk with AI about what they see
/news/2026/04/14/cameras-in-wireless-earbuds-vuebuds/
Tue, 14 Apr 2026

Two black earbuds: one with the casing removed, exposing a computer chip and tiny camera.
91±ŹÁÏ researchers developed a system called VueBuds that uses tiny cameras in off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. Here, the altered headphones are shown with the camera inserted. Photo: Kim et al./CHI ‘26

91±ŹÁÏ researchers developed the first system that incorporates tiny cameras in off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. For instance, a user might turn to a Korean food package and say, “Hey Vue, translate this for me.” They’d then hear an AI voice say, “The visible text translates to ‘Cold Noodles’ in English.”

The prototype system, called VueBuds, takes low-resolution, black-and-white images, which it transmits over Bluetooth to a phone or other nearby device. A small artificial intelligence model on the device then answers questions about the images within around a second. For privacy, all of the processing happens on the device, a small light turns on when the system is recording, and users can immediately delete images.

The team will present its research April 14 at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona.

“We haven’t seen most people adopt smart glasses or VR headsets, in part because a lot of people don’t like wearing glasses, and they often come with privacy concerns, such as recording high-resolution video and processing it in the cloud,” said senior author , a 91±ŹÁÏ professor in the Paul G. Allen School of Computer Science & Engineering. “But almost everyone wears earbuds already, so we wanted to see if we could put visual intelligence into tiny, low-power earbuds, and also address privacy concerns in the process.”

Cameras use far more power than the microphones already in earbuds, so using the same sort of high-res cameras as those in smart glasses wouldn’t work. Bluetooth also can’t stream large amounts of data continuously, so the system can’t run continuous video.

The team found that using a low-power camera — roughly the size of a grain of rice — to shoot low-resolution, black-and-white still images limited battery drain and allowed for Bluetooth transmission while preserving performance.
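The article doesn’t give VueBuds’ exact image specifications, so the resolutions below are hypothetical, but they show why low-resolution grayscale stills fit within a Bluetooth budget where high-resolution color frames would not:

```python
def payload_kb(width: int, height: int, bytes_per_pixel: int) -> float:
    """Raw, uncompressed image payload in kilobytes (illustrative only)."""
    return width * height * bytes_per_pixel / 1024

# Hypothetical figures -- the article doesn't specify the cameras' resolutions:
lowres_gray = payload_kb(320, 240, 1)    # grayscale still: 75.0 KB
hires_color = payload_kb(1920, 1080, 3)  # full-HD color frame: 6075.0 KB
print(f"{hires_color / lowres_gray:.0f}x more data per frame")  # 81x more data per frame
```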

There was also the matter of placement.

“One big question we had was: Will your face obscure the view too much? Can earbud cameras capture the user’s view of the world reliably?” said lead author , who completed this work as a 91±ŹÁÏ doctoral student in the Allen School.

The team found that angling each camera 5-10 degrees outward provides a 98-108 degree field of view. While this creates a small blind spot when objects are held closer than 20 centimeters from the user, people rarely hold things that close to examine them — making it a non-issue for typical interactions.

Researchers also discovered that while the vision language model was largely able to make sense of the images from each earbud, having to process images from both earbuds slowed it down. So they had the system “stitch” the two images into one, identifying overlapping imagery and combining it. This allows the system to respond in one second — quick enough to feel like real-time for users — rather than the two seconds it takes with separate images.
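The team’s stitching pipeline presumably aligns the two views using image features; this simplified numpy sketch assumes the overlap width is already known and just blends the shared columns, to show the shape of the idea:

```python
import numpy as np

def stitch(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Merge two grayscale frames that share `overlap` columns by averaging
    the shared region (simplified: a real system would align features)."""
    blend = (left[:, -overlap:].astype(np.float32)
             + right[:, :overlap].astype(np.float32)) / 2
    return np.hstack([left[:, :-overlap],
                      blend.astype(left.dtype),
                      right[:, overlap:]])

left = np.full((240, 320), 100, dtype=np.uint8)   # hypothetical frame sizes
right = np.full((240, 320), 200, dtype=np.uint8)
pano = stitch(left, right, overlap=40)
print(pano.shape)  # (240, 600) -- one combined image for the model instead of two
```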

The team then had 74 participants compare recorded outputs from VueBuds with outputs from Ray-Ban Meta Glasses in a series of tests. Despite VueBuds using low-resolution images with greater privacy controls and the Ray-Bans taking high-res images processed on the cloud, the two systems performed equivalently. Participants preferred VueBuds’ translations, while the Ray-Bans did better at counting objects.

Sixteen participants also wore VueBuds and tested the system’s ability to translate and answer basic questions about objects. VueBuds achieved 83-84% accuracy when translating or identifying objects and 93% when identifying the author and title of a book.

This study was designed to gauge the feasibility of integrating cameras in wireless earbuds. Since the system only takes grayscale images, it can’t answer questions that involve color in the scene.

The team wants to add color to the system — though color cameras require more power — and to train specialized AI models for specific use cases, such as translation.

“This study lets us glimpse what’s possible just using a general purpose language model and our wireless earbuds with cameras,” Kim said. “But we’d like to study the system more rigorously for applications like reading a book — for people who have low vision or are blind, for instance — or translating text for travelers.” 

Co-authors include , a 91±ŹÁÏ master’s student in the Allen School, and , , , and , all 91±ŹÁÏ students in electrical and computer engineering.

For more information, contact vuebuds@cs.washington.edu.

At quantum testbed lab, researchers across the 91±ŹÁÏ probe ‘spooky’ mysteries of quantum phenomena
/news/2026/04/13/qt3-quantum-computing-testbed-lab-dilution-fridge/
Mon, 13 Apr 2026

Three people stand next to a complex metal tube-shaped machine
Max Parsons (left), assistant professor of electrical and computer engineering, works with undergraduate staff members Reynel Cariaga (center) and Jesus Garcia (right) at the QT3 lab. The device in the foreground is a scanning tunneling microscope that can image individual atoms within a material by scanning an extremely fine needle — just one atom thick at the tip — across the sample. Photo: Erhong Gao/91±ŹÁÏ

Even on a campus like the 91±ŹÁÏ’s — home to particle accelerators, wave tanks and countless other bespoke pieces of equipment — the machinery in the quantum testbed lab stands out. Take the dilution fridge, a large, white, cylindrical device that can cool a small chamber to one hundredth of a kelvin above absolute zero — the coldest possible temperature in the universe.

“This is the coldest fridge money can buy,” said Max Parsons, a 91±ŹÁÏ assistant professor of electrical and computer engineering and the former director of the lab, which goes by the nickname QT3. “When it’s running, the chamber inside this device is about 100 times colder than outer space. At that temperature, it’s much easier to study and manipulate a material’s quantum properties.”

The lab also houses a photon qubit tabletop lab: a nondescript set of boxes, lasers and lenses that can demonstrate the “spooky” — a term scientists actually use — phenomenon known as quantum entanglement, where two particles appear to communicate instantaneously with each other despite being physically apart.

Or there’s the lab’s latest acquisition, the scanning tunneling microscope, which can image individual atoms within a solid material, allowing researchers to study the structure of materials at the smallest scales.

An interdisciplinary group of researchers has spent three years marshalling resources and expertise to create QT3, and now the lab is opening its doors as a one-stop shop for quantum researchers and educators at the 91±ŹÁÏ.

“The idea of this lab is to improve access to quantum hardware,” Parsons said. “It’s rather hard to acquire equipment like this. And there are a lot of researchers that may have good ideas that they want to test, but don’t have the resources yet for their own equipment. So we’re inviting researchers, initially from across campus, but also from other universities and from industry, to come in and test their ideas. This can be a hub for quantum experts to share their ideas and collaborate.”

The lab also boasts hardware that can demonstrate known quantum principles and techniques, making it useful for students in quantum fields. In addition to the entanglement device, Parsons’ students developed a machine that can suspend charged particles — in this case, tiny grains of pollen — in midair using electric fields. Researchers use the same technique to trap single atoms and manipulate their quantum properties, making the lab’s ion-trapping machine good practice for more complex work.

Two tiny dots hover back and forth in a tube
The QT3 facility’s ion trapping lab gives students a chance to practice techniques used in quantum computing research. Here, students have suspended two tiny grains of pollen — the red dots hovering back and forth — in midair using electric fields. Photo: Robert Thomas

Some students even work at the lab through an undergraduate staffing program, and have helped install instrumentation, write code to power equipment and build parts for custom microscopes. The program provides yet another avenue for students to get hands-on experience with unusual machinery and techniques.

“Quantum mechanics is inherently counterintuitive, and that makes it a powerful teaching tool,” Parsons said. “In the QT3 lab, students will encounter systems where their everyday intuition breaks down, and they must rely on careful reasoning and experimentation instead. They learn how to debug when results don’t match expectations, how to test simple cases and how to build understanding about hardware step by step.”

The cosmically cold dilution fridge remains something of a centerpiece, even as the lab fills up with specialized equipment. The extreme environment within the device strips heat, light and other stray energy away from materials, allowing researchers to observe the peculiar quantum properties that remain. One such property is superposition, or the ability of a particle like an electron to maintain multiple mutually exclusive properties at the same time. Scientists use superposition to create a powerful, tiny piece of technology: a quantum bit, or qubit.

“Traditional computers use bits, which can only be one or zero. A qubit, on the other hand, we can make one plus zero,” Parsons said. “It’s both at the same time, and only when we measure it do we find out which one it is. We can use this unusual property to build a new class of computers that excel at tasks like communications and encryption.”
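Parsons’ “one plus zero” description maps directly onto the standard formalism: a qubit is a unit vector of two amplitudes, and measurement probabilities are the squared magnitudes. A textbook illustration (not code from the QT3 lab):

```python
import numpy as np

# Equal superposition: (|0> + |1>) / sqrt(2)
state = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: the chance of measuring 0 or 1 is the squared amplitude
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -- both outcomes equally likely until measured
```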

QT3 is part of a collaborative effort to solidify 91±ŹÁÏ as a leader in quantum research and applications. Most of the lab hardware was funded by a congressional earmark championed by Senator Maria Cantwell’s office. Departmental funding from across the College of Engineering and the College of Arts and Sciences helped rehab the lab space. The National Science Foundation provided seed funding for the instructional lab equipment.

a repeating hexagonal pattern of small golden blobs
An image captured by the QT3 lab’s scanning tunneling microscope reveals a lattice of individual atoms in a sample of silicon. Photo: Rajiv Giridharagopal

The 91±ŹÁÏ has also spent the past decade investing heavily in faculty with quantum expertise.

“Very few places have expertise across the full quantum stack, from materials up to algorithms,” said , a 91±ŹÁÏ professor of physics and founder of QT3. “The 91±ŹÁÏ has quantum faculty in electrical and mechanical engineering, physics, computer science, materials science and chemistry. Our faculty work on superconducting qubits, spin defects, photons, trapped ions, neutral atoms and topological qubits. Our advantage is the breadth of our investment.”

The lab is now available to researchers and students across the 91±ŹÁÏ, and private companies are encouraged to reach out about partnering. Parsons has already used the lab to teach a graduate-level class in electrical and computer engineering for students who included employees from Boeing, Microsoft and quantum computing company IonQ. The lab is hiring a full-time manager to maintain the equipment and help users make the most of the facility.

“Here in academia, we can improve the building blocks for applied technologies like quantum computing, and then transfer those learnings to industry for further scaling,” Parsons said.

For more information, contact Parsons at mfpars@uw.edu.

Climate change may complicate avalanche risk across the Pacific Northwest
/news/2026/03/23/climate-change-avalanche-risk/
Mon, 23 Mar 2026

Snowy mountains with two signs in foreground. A yellow sign reads “AVALANCHE AREA”; a red and white sign reads “NO STOPPING OR STANDING NEXT Ÿ MILE”.
Warming temperatures throughout the Pacific Northwest are likely to complicate avalanche forecasting in the coming years, according to a new 91±ŹÁÏ study. Cooler inland regions such as Idaho and Western Montana may see increased risk from avalanches caused by layers of icy crusts that form when rain falls on snow and freezes. Photo: iStock

This winter was unusually warm; as a result, many snowy, alpine areas have seen bouts of winter rainfall where there would ordinarily only be snow. These unusual weather patterns have contributed to an abysmal ski season, but they can also set the stage for dangerous avalanches. At temperatures close to freezing, precipitation can fall as rain but freeze when it hits the snow, forming an icy crust. Snow that accumulates on top of that crust is unstable and prone to abrupt slides, causing an avalanche that can close down a major highway in moments, endanger backcountry skiers and more.

Avalanche experts in Western Washington know how to manage the risks associated with rain-on-snow events, but many of their counterparts in colder regions like Eastern Washington, Idaho and Montana are less familiar with these dynamics. New research from the 91±ŹÁÏ shows that as winters in these regions warm, their snowpacks may come to resemble those of maritime areas, with more rain-on-snow events, icy crusts and complex avalanche forecasting.

The findings were published in ARC Geophysical Research.

“This winter’s warmth is a harbinger,” said lead author , a 91±ŹÁÏ graduate student of civil and environmental engineering. “We know that temperatures will keep rising, and our work is a red flag for cooler regions of the greater Pacific Northwest, such as Idaho and Western Montana, that aren’t used to dealing with ice crusts and their resulting avalanche problems.”

A cross-section of a snow drift with a shovel in the foreground. A horizontal line is visible running through the drift about halfway up.
A cross-section of snowpack reveals a thin, darker ice layer running horizontally through the snow. Ice layers like this one form when rain falls onto snow and freezes, forming a crust. This creates a boundary within the snowpack that can cause snow to slip and trigger an avalanche. Photo: Clinton Alden

The study is part of a larger effort to understand the structure of snow as it accumulates, which has implications for weather and avalanche forecasting, wildlife dynamics and more.

“Snow scientists are pretty good at measuring snow depth and volume,” said senior author , a 91±ŹÁÏ professor of civil and environmental engineering. “We’re also pretty good at figuring out how much water you get if all that snow melts. But our models aren’t as good at representing snow structure, such as layers of different densities and crystal types that increase avalanche risks. And we really want to know how the structure of snow changes as the climate changes. That’s a tricky question that no one has tackled, particularly for rain-on-snow conditions.”

To dig into that question, the researchers studied how warming influences ice layer formation in seasonal snowpacks. First, they collected temperature and precipitation data captured by 53 monitoring stations across the Pacific Northwest for the past 25 years. They used a computer model to identify days when ice layers likely formed at each location. They then checked the model against real-world measurements at one of the locations — a station at Snoqualmie Pass — and found that the model matched the measurements with 74% accuracy.

Finally, they used the same model to simulate those same 25 winters at 2 C and 4 C warmer than they were, and looked for changes to the number of ice crusts across the region. , the Pacific Northwest is expected to warm by 2 C to 5 C by 2050 as compared to pre-2000 temperatures.
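The study’s calibrated model isn’t reproduced here, but the core bookkeeping — warm each historical day, then re-count the days where rain likely falls on snow and refreezes — can be sketched with hypothetical thresholds:

```python
def crust_days(temps_c, precip_mm, warming=0.0, lo=-0.5, hi=0.5):
    """Count days where precipitation likely falls as rain onto snow and
    refreezes: temperatures near 0 C. The thresholds are illustrative,
    not the study's calibrated model."""
    return sum(1 for t, p in zip(temps_c, precip_mm)
               if p > 0 and lo <= t + warming <= hi)

# A made-up week of daily means (C) and precipitation (mm):
temps = [-2.2, -0.4, 0.2, 1.5, -1.2]
precip = [5.0, 2.0, 3.0, 0.0, 4.0]
print(crust_days(temps, precip))               # 2 crust days historically
print(crust_days(temps, precip, warming=2.0))  # 1 after +2 C of warming
```

With a different (colder) baseline, the same warming can push more days into the near-freezing band instead, which is the regional split the study found.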

A map of the Pacific Northwest with red and blue triangles scattered across it. The red triangles point down and the blue triangles point up.
This map shows the change in number of “ice crust days” across the 53 monitoring sites during the simulated winter with 2 C warming. The Cascade sites overwhelmingly saw fewer theoretical ice crust days, whereas cooler inland regions overwhelmingly saw more. Photo: Alden et al./ARC Geophysical Research

The results were split regionally by the Cascades. In colder, inland parts of the Pacific Northwest — places like Eastern Washington, Idaho and Montana — higher temperatures created more rain-on-snow days and more avalanche-prone ice layers. Locations in the warmer, maritime Cascades saw the opposite effect: Higher temperatures created slush instead of ice, potentially reducing the avalanche risk associated with ice crusts.

The predicted snowpack changes may also impact wildlife behavior. Some foraging mammals, such as reindeer, dig down into the snow in search of food and may have a hard time breaking through an icy crust. Conversely, firm ice might provide a better running surface for animals fleeing predators. Specific regional effects will require additional study.

What’s clear now is that those who work or play in avalanche terrain in broad swaths of the Pacific Northwest — and even beyond — may need to adjust to a new set of risk factors.

“I get calls from avalanche forecasters in places like Colorado, Wyoming and Montana. They tell me they’re getting rain at 10,000 feet, which they’ve never seen before,” said co-author , the avalanche forecaster supervisor at the Washington State Department of Transportation at Snoqualmie Pass, who earned his master’s in transportation and highway engineering at the 91±ŹÁÏ. “They want to know when to expect the onset of avalanches and when to expect the return to stability.”

Alden hopes that this research will encourage further collaboration within the avalanche forecasting community.

“I’d love to see this shared with avalanche forecasters widely, both as a call to action and as a way to help them understand what their snowpack might look like in the future,” Alden said.

, the director of geospatial science at Audubon Alaska and former doctoral student of environmental and forest sciences at the 91±ŹÁÏ, is a co-author.

This research was funded by the NASA Interdisciplinary Research in Earth Science program and the 91±ŹÁÏ Program on Climate Change’s Graubard Fellowship.

For more information, contact Alden at cdalden@uw.edu.

New marine energy tech is put to the test at Harris Hydraulics Lab
/news/2026/03/06/marine-energy-turbines-harris-hydraulics-uw-pnnl/
Fri, 06 Mar 2026

At the 91±ŹÁÏ Harris Hydraulics Lab, an odd scene plays out. Over and over again, researchers from the 91±ŹÁÏ and the Pacific Northwest National Laboratory (PNNL) pass a small rubber model of a marine animal through a large tank filled with flowing water and fitted with a spinning turbine. On some runs, the model bonks against the turbine blades; on others, it receives a glancing blow or sails past undisturbed. When bonks or nicks occur, a small collision sensor on one of the turbine’s blades detects the impacts and plots the interactions in a computer program.

The researchers are repeatedly simulating something that they hope will rarely happen in the wild: a collision between marine wildlife like a seabird, seal, fish or whale — or submerged debris like logs — and an underwater turbine.

“We want to make sure we’re minimizing the chances of a collision in the first place,” said Aidan Hunt, a senior research engineer in mechanical engineering at the 91±ŹÁÏ and member of the Pacific Marine Energy Center (PMEC). “But if a collision were to occur, we want to be able to detect it, and potentially avoid it, in real time. The available evidence suggests that collisions are rare, but we’re taking a ‘trust-but-verify’ approach.”

Marine energy — power harvested from tides, waves and currents — has enormous potential as a clean, renewable resource. But more information is needed about how large, commercial installations of underwater turbines or power-generating buoys could affect marine wildlife, whether through increased noise in the environment, habitat change or direct interactions with equipment.

The marine collision experiments are part of the Triton Initiative, a collection of projects led by PNNL to study the environmental impact of marine energy.

The work at Harris Hydraulics follows a previous study by PNNL and the 91±ŹÁÏ Applied Physics Lab using a four-foot-tall prototype turbine installed at the entrance to Sequim Bay. In that study, researchers trained an underwater camera on the turbine for 109 days and then catalogued every instance of an animal approaching or interacting with the turbine. The camera captured more than 1,000 instances of fish, birds and seals approaching the turbine blades. There were only four collisions, and all involved small fish.

“This study was a first step, but a promising one,” said co-author , a research scientist at the 91±ŹÁÏ Applied Physics Lab. “We didn’t see any endangered species in our study, and the risk of collision for seals and sea birds seemed to be quite low. We’re excited to get back out there with the camera and learn even more.”

The Sequim Bay experiment generated hours of valuable data, but that degree of intense monitoring may not be practical in large commercial installations in the future. Cheaper impact sensors, like the ones logging bath toy impacts at Harris Hydraulics, could be a solution, researchers say.

The project is funded by the U.S. Department of Energy’s Hydropower & Hydrokinetics Office, through the Pacific Northwest National Laboratory’s Triton Initiative and the TEAMER program.

For more information, contact Hunt at ahunt94@uw.edu or Emma Cotter at emma.cotter@pnnl.gov.

Selective forest thinning in the eastern Cascades supports both snowpack and wildfire resilience
/news/2026/03/03/forest-thinning-snowpack-snow-drought-wildfire-resilience/
Tue, 03 Mar 2026

An aerial photo of a snowy forest with a mountain range in the background. In the foreground, several small figures stand next to a pickup truck.
91±ŹÁÏ researchers, including members of the RAPID facility, fly a drone along Cle Elum Ridge in the Eastern Cascades. The drone was equipped with a lidar sensor that helped the team build a detailed 3D map of the study area and changes to the snowpack there. Photo: Mark Stone/91±ŹÁÏ

As climate change nudges weather in the eastern Cascades in extreme and volatile directions, forest managers in the region have a lot to juggle. Hotter, drier summers are contributing to bigger and more frequent wildfires. Meanwhile, warmer winters may cause the Cascades to lose 50% of their annual snowpack over the next 70 years. Mountain snow provides the Yakima River Basin with 75% of its water, making it a crucial reservoir for both nature and agriculture. Less winter snow also leads to drier and more fire-prone forests in the summer.

To encourage fire resilience, forest managers use tried-and-true tools like controlled burning and the selective felling of trees to thin out the forest. Both methods remove fuel and help return forests to historical conditions — but less is known about their impact on snowpack.

To address this knowledge gap, a team of researchers at the 91±ŹÁÏ and The Nature Conservancy (TNC) embarked on an ambitious, multiyear study of snowpack along Cle Elum Ridge, an area of the eastern Cascades in the headwaters of the Yakima River Basin. The group experimentally thinned the forest to varying degrees in a roughly 150-acre area. Then, they measured the amount and duration of snowpack during the winter of 2023 and compared it to a previous winter before the forest treatment.

The results were encouraging: Forest thinning efforts increased snowpack by 30% on north-facing slopes and by 16% on south-facing slopes. Thinning aided snowpack the most where it created a patchwork of gaps in the forest rather than a more even density; gaps of 4-16 meters in diameter seemed to be the “sweet spot” for snow.

The research points toward more refined forest management practices that can optimize for both wildfire resilience and snowpack.

The team published its findings in Frontiers in Forests and Global Change.

“At its core, this research shows that reducing wildfire risk and protecting water resources don’t have to be competing goals,” said lead author Cassie Lumbrazo, a postdoctoral researcher at the University of Alaska who completed this work as a 91±ŹÁÏ doctoral student of civil and environmental engineering. “That’s genuinely good news for a place facing both growing wildfire threats and increasing water vulnerability. So much of the climate conversation focuses on loss, which makes findings like this especially meaningful.”

A photo series shows researchers launching drones, strapping time-lapse cameras to trees, setting up instruments and measuring snow depth at the study site.

Predicting snowpack in forested areas, especially those at higher altitudes, hinges on understanding how much snow reaches the ground and how much lands in the forest canopy. Snow on the ground is more likely to stick around through the season, whereas snow in the trees may either melt or sublimate back into water vapor. In either case, it wouldn’t add to the reservoir of water that melts in the spring and summer.

“Trees intercept snow and so can reduce snowpack, but trees also shade snow and so can retain snowpack,” said senior author Jessica Lundquist, a 91±ŹÁÏ professor of civil and environmental engineering. “The dominant effect depends on winter temperatures, and the Cascade crest near Cle Elum is right on the border where the effect flips from trees decreasing snow to trees saving snow.”

Earlier work found that natural gaps in the forests of the eastern Cascades accumulated more snow. This, combined with other research, gave the team reason to hope for a positive connection between forest thinning and snowpack, though it wasn’t a sure thing: other studies have found that open areas elsewhere in the Western U.S. saw reduced snowpack.

Thus, it was time for a direct — and complex — study of managed forests.

Researchers picked Cle Elum Ridge for the work, where TNC’s forest managers were planning thinning treatments to improve forest health and wildfire resiliency. The orientation of the ridge allowed them to compare north- and south-facing slopes — southern slopes in the region see more sunshine and less snow retention on average. From October 2021 to September 2022, the researchers worked with TNC’s forest managers and local contract loggers to remove trees on both slopes in a gradient, from no thinning to extensive. The team also set up time-lapse cameras at several strategic points to measure snow depth over time.

Then, they waited for snow to fall.

By March 2023, the area was close to its peak snowpack, and the team returned with staff and equipment from the 91±ŹÁÏ RAPID facility. The RAPID crew flew a specialized drone that generated a detailed 3D map of the study area using a laser-mapping technology called lidar.

By comparing the new 3D map and time-lapse imagery to lidar data captured before the forest treatment, the team was finally ready to calculate two things: the change to the forest structure, and its effect on the snowpack.

Three photorealistic 3D renderings of trees in a snowy forest.
Lidar renderings of three different areas of the forest studied by the team. Left: a dense, untreated forest stand. Center: a medium-density thinned stand with tree clumps and gaps. Right: a dense stand with a canopy gap. Photo: Cassie Lumbrazo and Karen Dedinsky

Across the whole study area, the team found that thinning helped the forest recover 12.3 acre-feet (or about four million gallons) of water in the form of snow per 100 acres on north-facing slopes, and 5.1 acre-feet (or about 1.5 million gallons) per 100 acres on south-facing slopes.

As expected, areas where the thinning opened gaps in the canopy were most effective at restoring snow storage that had been previously lost to environmental degradation and climate change. Gaps of 4-16 meters in diameter seemed to retain the most snow, though there were few gaps larger than 16 meters to evaluate.

One surprising result: The way forest managers thin forests doesn’t reliably create gaps. Forest managers map out their reductions using the density of trunks in an area, not canopies, as their primary measurement.

“Imagine a group of 100 people all holding umbrellas in the rain,” said a co-author who directs the 91±ŹÁÏ Climate Impacts Group. “They’re standing close enough together that their umbrellas overlap, so none of the rain hits the ground. If you remove 10 of the umbrellas randomly, you’d still have plenty of coverage overall. But, if you remove 10 umbrellas that are right next to one another, you create a gap in the umbrella ‘canopy,’ and you get a 10% increase in the amount of rain that hits the ground.”

That realization adds a nuance to the findings. It’s likely that forest thinning can benefit both wildfire and snowpack resilience at the same time, but only if managers keep canopy gaps in mind.
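The umbrella analogy is easy to check numerically. Below is a toy sketch (all numbers invented for illustration, not from the study) that lines up 100 overlapping "umbrellas" and compares removing 10 scattered ones against removing 10 adjacent ones; only the clustered removal opens a real gap.

```python
import numpy as np

# Toy model of the umbrella analogy: 100 overlapping "umbrellas" on a line.
# Each covers a 3-unit span, so every point starts out covered by about
# three umbrellas. All numbers here are illustrative.
centers = np.arange(100, dtype=float)     # umbrella positions
radius = 1.5                              # each covers [c - 1.5, c + 1.5]
ground = np.linspace(0, 99, 1981)         # sample points along the ground

def uncovered_fraction(kept_centers):
    """Fraction of ground points not under any kept umbrella."""
    # distance from every ground point to the nearest kept umbrella center
    dists = np.abs(ground[:, None] - kept_centers[None, :]).min(axis=1)
    return float(np.mean(dists > radius))

# Remove 10 umbrellas scattered evenly: neighbors still cover the holes.
scattered = np.delete(centers, np.arange(5, 100, 10))
# Remove 10 adjacent umbrellas: a contiguous gap opens in the "canopy."
clustered = np.delete(centers, np.arange(45, 55))

print(uncovered_fraction(scattered))   # 0.0: no gap opens
print(uncovered_fraction(clustered))   # ~0.08: a real gap appears
```

Scattered removal leaves every point within reach of a neighboring umbrella, while the clustered removal exposes a contiguous stretch of ground, just as clustered tree removal opens a canopy gap for snow.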

“One thing we all learned was that snow people and tree people speak different languages,” Lumbrazo said. “Different experts look at totally different variables to help them decide whether or not to cut down a single tree. So an important goal is to get everyone speaking the same language. And I think this paper is one step towards better communication.”

A short documentary from 2023 highlights the team’s fieldwork.

Overall, the results suggest practical changes to forest management practices in the eastern Cascades. For example, managers might consider more tree-thinning on north-facing slopes, since snowpack gains may be greater there. With further research, these learnings may also extend to other regions in the Pacific Northwest.

The work could also aid collaboration between forest managers and hydrologists at a time when the region needs all the water it can get.

“As we lose snowpack, everything becomes really squeezed,” said co-author Emily Howe, a senior aquatic ecologist at TNC who earned her doctorate in aquatic and fishery sciences at the 91±ŹÁÏ. “We are currently in our third consecutive year of water restrictions in the Yakima River Basin, and are staring down one of the lowest snow years on record. However, our research shows that the treatments currently used for restoring fire resilient forests are compatible with the forest structure needed for supporting water security. And in a world where climate change is reducing water supplies and increasing wildfire severity, we are pleased to report that the same forest treatments can support both goals.”

Co-authors include a former 91±ŹÁÏ graduate student of civil and environmental engineering; a former 91±ŹÁÏ undergraduate student of atmospheric and climate science; a data processing specialist at the 91±ŹÁÏ RAPID facility; and the director of Forest Conservation and Management at The Nature Conservancy.

This research was funded by the Washington Department of Natural Resources, The Nature Conservancy and the National Science Foundation.

For more information, contact Lundquist at jdlund@uw.edu, Dickerson-Lange at dickers@uw.edu or Howe at emily.howe@tnc.org.

]]>
DopFone app can accurately track fetal heart rate using only a smartphone /news/2026/02/26/dopfone-fetal-heart-rate-app/ Thu, 26 Feb 2026 16:58:23 +0000 /news/?p=90704
DopFone uses an off-the-shelf smartphone’s existing speaker and microphone to accurately estimate fetal heart rate. The phone mimics a Doppler ultrasound, emitting a tone and listening for the subtle variations in its echo caused by fetal heart beats. A machine learning model then estimates the heart rate. Photo: Garg et al./Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

Heart rate is an important sign of fetal health, yet few technologies exist to easily and inexpensively track fetal heart rates outside of doctors’ offices. This can create risks for pregnancies in low-resource regions where doctors are far away or inaccessible.

A team led by 91±ŹÁÏ researchers has created DopFone, a system that uses an off-the-shelf smartphone’s existing speaker and microphone to accurately estimate fetal heart rate. The phone mimics a Doppler ultrasound, emitting a tone and listening for the subtle variations in its echo caused by fetal heartbeats. A machine learning model then estimates the heart rate. In a clinical test with 23 pregnant women, DopFone estimated heart rate with an average error of 2 beats per minute, or bpm. The accepted clinical range is within 8 bpm.

The team published its findings Dec. 2 in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

“Eventually DopFone could let people test fetal heart rate regularly, rather than relying on the intermittent tests at a doctor’s office, or not getting tested at all,” said lead author Garg, a 91±ŹÁÏ doctoral student in the Paul G. Allen School of Computer Science & Engineering. “Patients might then send this data to doctors so that they can better judge patients’ health when they’re not in a clinic.”

Traditional Doppler ultrasounds, the clinical standard for fetal heart rate monitoring, work by sending high-frequency sound into a person’s body and tracking how the echo changes in frequency. They’re very accurate at measuring fetal heart rate but require costly equipment and a skilled technician to operate it.

To use DopFone, a user places the phone’s microphone against their abdomen for one minute. The phone emits a near-inaudible 18 kilohertz tone. The team chose this comparatively low frequency because — unlike a Doppler’s high frequencies, above 2,000 kilohertz — it sits within the range smartphone microphones can record while still traveling well through tissue. As the tone reflects within the user’s abdomen, the fetus’s heartbeat creates small shifts in the sound.

A machine learning model then estimates the heart rate using the audio and the patient’s demographic information.
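The sensing idea can be sketched in a few lines of Python. This is an illustrative toy, not DopFone’s actual algorithm: a synthesized 18 kHz tone with a faint amplitude wobble at 140 bpm stands in for a real recording, and a simple demodulate-and-FFT step recovers the rate. The sample rate, modulation depth and search band are all assumptions.

```python
import numpy as np

# Illustrative sketch (not the DopFone implementation): recover a heart-rate
# modulation riding on an 18 kHz tone.
fs = 48_000                  # assumed smartphone sample rate
duration = 60                # seconds of synthetic "recording"
true_bpm = 140
t = np.arange(int(fs * duration)) / fs

f_carrier = 18_000           # near-inaudible tone, per the article
f_beat = true_bpm / 60       # 140 bpm -> ~2.33 Hz
# tiny amplitude modulation standing in for heartbeat-induced echo changes
mic = (1 + 0.05 * np.cos(2 * np.pi * f_beat * t)) * np.cos(2 * np.pi * f_carrier * t)

# Shift the carrier down to 0 Hz, then inspect the low-frequency spectrum.
baseband = mic * np.exp(-2j * np.pi * f_carrier * t)
spectrum = np.abs(np.fft.fft(baseband))
freqs = np.fft.fftfreq(len(baseband), 1 / fs)

# Search for the strongest component in a plausible fetal-heart-rate band.
band = (freqs > 1.0) & (freqs < 5.0)    # roughly 60-300 bpm
est_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(round(est_bpm))                    # -> 140
```

A real recording would be far noisier, which is where the machine learning model earns its keep, but the demodulation step above is the basic Doppler-style trick the article describes.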

The team tested DopFone in 91±ŹÁÏ Medicine’s maternal-fetal medicine division on 23 pregnant patients between 19 and 39 weeks of pregnancy. On average its readings were within 2.1 bpm of the medical Doppler ultrasound. Its accuracy was slightly diminished for patients with high body mass indexes, though those readings were still within normal limits. Because an irregular fetal heartbeat is often an emergency, DopFone was not tested on patients with irregularities.

Next, the team plans to gather more data outside a lab to better train the model. Eventually they want to deploy it as a publicly available app.

“This women’s health space is often overlooked,” Garg said. “So I want to focus on accessible alternatives that can be available to people in low-resource areas, whether that’s here in the U.S. or in other countries. Because health belongs to everyone.”

Co-authors include a 91±ŹÁÏ graduate student in electrical and computer engineering; two OB/GYNs in 91±ŹÁÏ Medicine’s maternal-fetal medicine division; and a 91±ŹÁÏ assistant professor in the Allen School. A 91±ŹÁÏ professor in the Allen School and in electrical and computer engineering and a researcher at the Georgia Institute of Technology were senior authors. This research was funded by the 91±ŹÁÏ Gift Fund.

For more information, contact Garg at pgarg70@uw.edu.

]]>
In a study, AI model OpenScholar synthesizes scientific research and cites sources as accurately as human experts /news/2026/02/04/in-a-study-ai-model-openscholar-synthesizes-scientific-research-and-cites-sources-as-accurately-as-human-experts/ Wed, 04 Feb 2026 16:02:30 +0000 /news/?p=90533 A screenshot of the OpenScholar demo.
A 91±ŹÁÏ and Ai2 research team built OpenScholar, an open-source AI model designed specifically to synthesize current scientific research. In tests, OpenScholar cited sources as accurately as human experts, and 16 scientists preferred its responses to those written by subject experts 51% of the time. Above is the user interface for a free online demo of the model.

Keeping up with the latest research is vital for scientists, but given that are published every year, that can prove difficult. Artificial intelligence systems show promise for quickly synthesizing seas of information, but they still tend to make things up, or “hallucinate.” 

For instance, when a team led by researchers at the 91±ŹÁÏ and the Allen Institute for AI, or Ai2, studied a recent OpenAI model, they found it fabricated 78-90% of its research citations. And general-purpose AI models like ChatGPT often can’t access papers that were published after their training data was collected.

So the 91±ŹÁÏ and Ai2 research team built OpenScholar, an open-source AI model designed specifically to synthesize current scientific research. The team also created the first large, multi-domain benchmark for evaluating how well models can synthesize and cite scientific research. In tests, OpenScholar cited sources as accurately as human experts, and 16 scientists preferred its responses to those written by subject experts 51% of the time.

The team published its findings Feb. 4 in Nature. The project’s resources are publicly available and free to use.

“After we started this work, we put the demo online and quickly, we got a lot of queries, far more than we’d expected,” said senior author Hannaneh Hajishirzi, a 91±ŹÁÏ associate professor in the Paul G. Allen School of Computer Science & Engineering and senior director at Ai2. “When we started looking through the responses we realized our colleagues and other scientists were actively using OpenScholar. It really speaks to the need for this sort of open-source, transparent system that can synthesize research.”

Try the free online demo.

Researchers trained the model and then created a set of 45 million scientific papers for OpenScholar to pull from to ground its answers in established research. They coupled this with a technique called “retrieval-augmented generation,” which lets the model search for new sources, incorporate them and cite them after it’s been trained.
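The retrieve-then-cite loop can be caricatured in plain Python. This toy is not OpenScholar’s code: the three-paper "store" and the bag-of-words cosine scoring are invented for illustration, standing in for a 45-million-paper datastore and a learned retriever.

```python
from collections import Counter
import math

# Toy sketch of retrieval-augmented generation: score a query against a
# tiny "paper store," keep the best matches, and build a grounded prompt.
papers = {
    "[1]": "retrieval augmented generation grounds language models in documents",
    "[2]": "fetal heart rate monitoring with doppler ultrasound",
    "[3]": "large language models hallucinate citations without retrieval",
}

def bow(text):
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    num = sum(a[w] * b[w] for w in a)
    return num / (math.sqrt(sum(v * v for v in a.values())) *
                  math.sqrt(sum(v * v for v in b.values())))

def retrieve(query, k=2):
    """Return the ids of the k papers most similar to the query."""
    q = bow(query)
    ranked = sorted(papers, key=lambda pid: cosine(q, bow(papers[pid])), reverse=True)
    return ranked[:k]

query = "why do language models hallucinate citations"
top = retrieve(query)
# A generator would now answer *using only these passages* and cite them.
prompt = "Answer with citations:\n" + "\n".join(f"{pid} {papers[pid]}" for pid in top)
print(top)   # -> ['[3]', '[1]']
```

The key property the article describes survives even in this caricature: because the retrieval step runs at query time, new papers can be added to the store after training and still be found and cited.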

“Early on we experimented with using an AI model with Google’s search data, but we found it wasn’t very good on its own,” said lead author Akari Asai, a research scientist at Ai2 who completed this research as a 91±ŹÁÏ doctoral student in the Allen School. “It might cite some research papers that weren’t the most relevant, or cite just one paper, or pull from a blog post randomly. We realized we needed to ground this in scientific papers. We then made the system flexible so that it could incorporate emerging research through search results.”

To test their system, the team created ScholarQABench, a benchmark for evaluating systems on scientific search. They gathered 3,000 queries and 250 long-form answers written by experts in computer science, physics, biomedicine and neuroscience.

“AI is getting better and better at real world tasks,” Hajishirzi said. “But the big question ultimately is whether we can trust that its answers are correct.”

The team compared OpenScholar against other state-of-the-art AI models, such as OpenAI’s GPT-4o and two models from Meta. ScholarQABench automatically evaluated AI models’ answers on metrics such as their accuracy, writing quality and relevance.

OpenScholar outperformed all the systems it was tested against. The team had 16 scientists review answers from the models and compare them with human-written responses. The scientists preferred OpenScholar answers to human answers 51% of the time, but when they combined OpenScholar citation methods and pipelines with GPT-4o (a much bigger model), the scientists preferred the AI-written answers to human answers 70% of the time. They picked answers from GPT-4o on its own only 32% of the time.

“Scientists see so many papers coming out every day that it’s impossible to keep up,” Asai said. “But the existing AI systems weren’t designed for scientists’ specific needs. We’ve already seen a lot of scientists using OpenScholar and because it’s open-source, others are building on this research and already improving on our results. We’re working on a follow-up model, which builds on OpenScholar’s findings and performs multi-step search and information gathering to produce more comprehensive responses.”

Other co-authors include three 91±ŹÁÏ doctoral students in the Allen School; a 91±ŹÁÏ professor emeritus in the Allen School who is general manager and chief scientist at Ai2; a 91±ŹÁÏ postdoc in the Allen School and postdoc at Ai2; a 91±ŹÁÏ professor in the Allen School; a 91±ŹÁÏ assistant professor in the Allen School; Amanpreet Singh, Joseph Chee Chang, Kyle Lo, Luca Soldaini, Sergey Feldman, Mike D’Arcy, David Wadden, Matt Latzke, Jenna Sparks and Jena D. Hwang of Ai2; Wen-tau Yih of Meta; Minyang Tian, Shengyan Liu, Hao Tong and Bohao Wu of University of Illinois Urbana-Champaign; Pan Ji of University of North Carolina; Yanyu Xiong of Stanford University; and Graham Neubig of Carnegie Mellon University.

For more information, contact Asai at akaria@allenai.org and Hajishirzi at hannaneh@cs.washington.edu.

]]>
Q&A: 91±ŹÁÏ researchers create a smart glove with its own sense of touch /news/2026/01/27/smart-glove-electronic-touch-pressure-sensor-engineeering-soft-robotics/ Tue, 27 Jan 2026 21:19:51 +0000 /news/?p=90498 Two pieces of an electronic glove lie on a table.
Inside the OpenTouch Glove (right) is a grid of wires (left) that allows the glove to sense the location and degree of any pressure applied to it. Photo: 91±ŹÁÏ

Yiyue Luo’s lab at the 91±ŹÁÏ is full of machinery that’s oddly cozy. Here, soft and pliable sensors are sewn, knit and glued directly into clothing to give everyday garments new capabilities.

One of the lab’s newest curiosities is a nondescript gray work glove embedded with sensors that enable it to “feel” on its own. An array of small wires hidden inside the glove reports the location and degree of pressure anywhere along its surface. When in use, the signals from the glove inform a real-time “heat map” of pressure that could one day help physical therapy patients track their progress, teach robots to grasp objects, and more.

The OpenTouch Glove project, as it’s officially known, is led by 91±ŹÁÏ electrical and computer engineering doctoral student Devin Murphy as part of a collaboration with researchers at MIT. 91±ŹÁÏ News caught up with Murphy to learn more about the glove and its potential uses.

What inspired you to create this glove?

Devin Murphy: Our hands are arguably our greatest tools as humans. We interact with the world through our hands in so many different ways. But the nature of how we grasp and manipulate things in our environment is super nuanced and complex, and it’s hard to capture. We have very mature electronics that record sight and sound — think of the cameras and microphones in your smartphone. But there aren’t many electronic devices that record our other senses — like touch. That’s what I’ve been working to remedy with the OpenTouch Glove.

How does the glove work? What are its capabilities?

DM: There are two flexible circuit boards inside each glove that form a grid of wires across the gripping surface of the glove. We can measure pressure at any point in that mesh where two wires meet. The circuit boards connect to a little box of electronics at the user’s wrist, which processes the signals and sends them wirelessly to a laptop.

We can then generate a “heat map” image showing where force is being applied on the hand, where the hand is applying force to different objects and how much force the hand is applying.
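The wire-grid readout Murphy describes can be sketched as a crossbar scan. This is a hypothetical mock-up, not the OpenTouch firmware: the wire counts and the `read_crossing` stand-in for the hardware are invented for illustration.

```python
import numpy as np

# Illustrative sketch of crossbar-style pressure mapping: each (row, column)
# wire crossing yields one pressure reading, and a full scan becomes a
# "heat map" matrix.
ROWS, COLS = 8, 6   # hypothetical wire counts

def read_crossing(r, c, press_at=(2, 4)):
    """Stand-in for a hardware reading: pressure peaked at one crossing."""
    return np.exp(-((r - press_at[0])**2 + (c - press_at[1])**2) / 2.0)

def scan_glove():
    """Drive each row wire in turn and sample every column wire."""
    frame = np.zeros((ROWS, COLS))
    for r in range(ROWS):
        for c in range(COLS):
            frame[r, c] = read_crossing(r, c)
    return frame

frame = scan_glove()
peak = tuple(int(i) for i in np.unravel_index(np.argmax(frame), frame.shape))
print(peak)             # -> (2, 4): the crossing under the most pressure
for row in frame:       # crude text heat map: darker glyph = more pressure
    print("".join(" .:*#"[min(4, int(v * 4.99))] for v in row))
```

The real glove streams frames like this wirelessly to a laptop, where they are rendered as the color heat map described above.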

This kind of data gives us extra nuance that a camera can’t capture. For example, if your hand is in a bag or behind an object while it’s grasping things, a camera wouldn’t be able to tell what your hand is doing, whereas this glove can follow along.

What are some potential applications for the glove?

DM: I’m particularly excited about how this technology might help patients recovering from an injury. Physical therapists have patients perform a variety of tasks to regain mobility in their hands — if we can measure how much force people apply during this process, we can provide them with concrete feedback. The patient and therapist can both track progress by monitoring grip strength of the patient over time.

We’re also seeing lots of new companies invest in physical intelligence for robotics — basically recording how robots interact with the physical world. If we can record human hand grip signals, we might be able to teach robotic hands how to mimic human behavior.

One other interesting application is in augmented reality or virtual reality. If we replaced traditional controllers with these gloves, it could give users a more natural way to interact with virtual objects and scenery — though we’d need some additional technology for users to feel pressure when gripping virtual things.

How can other researchers access this technology?

DM: It’s really important to us that the glove is accessible to other researchers and anyone else who might want to use it for their own applications. You can order all of the components of the glove directly from commercial manufacturers, and we have released all of the manufacturing files and instructions for putting the glove together yourself.

We’ve also shown some demos of the glove “in the wild” to showcase the different kinds of data it can collect, and we’re planning to release an open-source data set collected with the glove in partnership with researchers at MIT.

I’m really excited about developing new wearable technologies that allow people to record less popular sensing modalities like touch. I want to figure out how we can capture the nuances of touch-based interactions, so that ultimately we can get better insights into our daily lives.

For more information, contact Murphy at devinmur@uw.edu.

]]>
Q&A: A 91±ŹÁÏ materials lab probes the mysteries of toughness at the nano scale /news/2026/01/21/lucas-meza-nanoscale-architecture-nanomaterials-mechanical-engineering/ Wed, 21 Jan 2026 17:13:20 +0000 /news/?p=90387
A splitscreen image showing a black and white webbed material on the left and a bubbled, foamy black and white material on the right.
Researchers in the Meza Research Group at the 91±ŹÁÏ draw inspiration from natural structures to develop new materials. On the left is a scanning electron microscope (SEM) image of naturally occurring spider silk. On the right is an SEM image of an engineered plastic material with a similar structure. The plastic is foamed using tiny carbon dioxide bubbles to make it lighter and tougher. Photo: Haynl et al./Nature Scientific Reports (left) and Dwivedi et al./Journal of the Mechanics and Physics of Solids (right).

UPDATE (Feb. 17, 2026): This story has been updated to note Meza’s work with the NSF I-Corps program and CoMotion Innovation Gap Fund.

Biology is full of architecture. Materials like wood, crab shells and bone all contain microscopic structures such as layers, lattices, cells and interwoven fibers. Those structures give natural materials an ideal combination of lightness and toughness, and they’ve inspired engineers to build artificial materials with similar properties. But how those tiny architectures lead to such tough materials is something of a mystery.

In 2019, Lucas Meza, assistant professor of mechanical engineering, set up the Meza Research Group at the 91±ŹÁÏ to tease out the mechanical secrets of structures that are as small as 100 nanometers, which is about the size of a virus. He arrived with an ambitious plan to build a new generation of nanomaterials, but soon discovered that the field was missing a fundamental understanding of toughness at tiny scales.

“We had to go back to basics,” Meza said.

In the years since, Meza and his team have flipped the script on nanomaterial toughness. They’re applying what they’ve learned to new kinds of bespoke materials, though along the way they’re still surprised by tiny structures behaving in ways they theoretically shouldn’t.

Meza spoke with 91±ŹÁÏ News about his strange and surprising journey into the nano realm.

What questions did you establish your lab to tackle?

Lucas Meza: Very broadly, we’re trying to design better materials, but not by introducing new material chemistries. Instead, we use architecture. This is something humans have done throughout history — think of woven textiles and fabrics, or straw-reinforced mud bricks. These are “architected materials,” where the structure of materials allows us to control useful properties like strength, toughness and flexibility.

The thing that I was particularly interested in was introducing architecture at the nanoscale. What if, instead of building a wall with bricks, we could use nanoplatelets? Or instead of making fabrics with yarn, we could use nanofibers? How would those properties change?

Engineers have found that nanomaterials are stronger, more flaw resistant and more deformable. The challenge is: How do you actually do something with them? We need to build them into large-scale materials in a way that preserves their unique nanoscale properties.

What material properties are you most interested in?

LM: We’re using architecture to tinker with a few interrelated properties. The first is a material’s strength, which is how much stress it can take before it permanently deforms. The second is ductility, which is how much a material can stretch before it breaks. Those two features sort of combine to determine a material’s toughness, which is the total amount of energy you have to put into a material to break it.

To give a couple of opposing examples: A ceramic plate is strong, meaning it can take a lot of stress, but it has very low ductility, meaning it barely deforms before breaking. So overall, it’s not a very tough material. Conversely, a rubber band is not strong at all — you can bend and stretch it with very little stress. But, it’s extremely ductile — it can stretch to many times its original dimensions without snapping. So as a result, rubber is very tough.
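Meza’s framing, that toughness is the total energy needed to break a material, corresponds to the area under a stress-strain curve. A minimal sketch with invented but order-of-magnitude-plausible numbers makes the ceramic-versus-rubber comparison concrete:

```python
import numpy as np

# Toughness can be read off as the area under a stress-strain curve.
# Both curves below are cartoons of the examples in the text, with invented
# but order-of-magnitude-reasonable numbers (stress in Pa, strain unitless).

# Ceramic-like: strong (~300 MPa at fracture) but breaks near 0.1% strain.
strain_c = np.linspace(0.0, 0.001, 100)
stress_c = 300e6 * (strain_c / 0.001)

# Rubber-like: weak (~5 MPa at break) but stretches to ~400% strain.
strain_r = np.linspace(0.0, 4.0, 100)
stress_r = 5e6 * (strain_r / 4.0)

def toughness(stress, strain):
    """Area under the curve via the trapezoid rule, in J per cubic meter."""
    return float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))

toughness_c = toughness(stress_c, strain_c)   # ~1.5e5 J/m^3
toughness_r = toughness(stress_r, strain_r)   # ~1.0e7 J/m^3
print(toughness_r > toughness_c)              # -> True: rubber is far tougher
```

Despite being roughly 60 times weaker, the rubber-like material absorbs far more energy before breaking, which is exactly the strength-versus-ductility trade-off Meza describes.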

Credit: 91±ŹÁÏ (left) and Envato (right).

Toughness is a particularly interesting property to study because there’s no limit on how tough a material can be. There are very hard limits on how strong and how stiff a material can be, and you can use architecture to optimize them, but you can’t exceed the properties of the base material. On the other hand, you can use architecture to improve the overall toughness of a material.

Nature has already created a lot of really interesting micro- and nano-structures. Every natural material has to be porous to transport nutrients, and on top of that we see things like lattices in some bone and in sea sponges; shells all have layered architectures; wood and bone are fiber composites; and all of this happens at the micro- and nanoscale.

There had to be a reason that nature was making these architectural motifs at the micro and nanoscale, and I had a strong hunch that it had to do with toughness.

What has your lab learned about toughness at the small scale?

LM: Initially, we learned a surprising amount about what we »ćŸ±»ćČÔ’t know. My thought in getting into this work was that people knew enough about fracture mechanics — how things break and why — that we could just dive into making really complicated architectures and studying their toughness, like the ones made by my former doctoral student. But we realized the scientific community has some big gaps in its understanding of fracture toughness. So instead, we had to go simple — basically we pulled and pushed and broke a lot of small things to understand what gives a material ductility and toughness.

We learned that all material behavior centers around something called a “plastic zone size.” Basically, when you pull on a part that has a crack, a little ball of energy builds up right at the tip of that crack. That energy ball grows as you add more stress, and at a certain point it shoots through the sample and causes a break. The size of the ball at its breaking point is the material’s plastic zone size, and it’s different for every material.

We realized that what makes a material ductile or not is its size relative to its plastic zone. If a material is smaller than its plastic zone size, that ball of energy can’t grow big enough to cause the crack to grow, so instead it spreads outward and the material bends.
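For readers who want numbers: fracture mechanics textbooks estimate the plastic zone size with Irwin’s plane-stress approximation, r_p ≈ (1/2π)(K_Ic/σ_y)². This is standard theory rather than a result from Meza’s paper, and the material values below are rough handbook magnitudes chosen for illustration.

```python
import math

# Irwin's plane-stress estimate of plastic zone size (textbook fracture
# mechanics, not a value from the study): r_p = (1/2*pi) * (K_Ic / sigma_y)^2.
def plastic_zone_m(K_Ic, sigma_y):
    """K_Ic in Pa*sqrt(m), strength in Pa; returns zone size in meters."""
    return (1 / (2 * math.pi)) * (K_Ic / sigma_y) ** 2

# Rough handbook magnitudes, for illustration only.
silica_glass = plastic_zone_m(K_Ic=0.8e6, sigma_y=5e9)    # brittle ceramic
aluminum     = plastic_zone_m(K_Ic=30e6,  sigma_y=300e6)  # ductile metal

print(f"glass:    {silica_glass * 1e9:.1f} nm")   # -> glass:    4.1 nm
print(f"aluminum: {aluminum * 1e3:.1f} mm")       # -> aluminum: 1.6 mm
```

The six-orders-of-magnitude spread is the point: a glass part is almost always bigger than its nanometer-scale plastic zone, so it snaps, while an aluminum part smaller than a few millimeters can bend instead of cracking, matching the size effect described above.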

The four material samples in this video are all the same size, but structural differences at the nanoscale produce different levels of ductility. In each example, the cyan color represents the sample’s plastic zone size. In less ductile samples, the cyan-colored area remains small and the material snaps, whereas in more ductile samples, the cyan area spreads out and the material stretches. Credit: Dwivedi et al./Journal of the Mechanics and Physics of Solids

This is the key for how to use architecture to cheat and get more ductility out of a material. If you take a brittle material and make a nanoscale lattice or foam out of it, its tiny struts and walls can end up smaller than the material’s plastic zone size. The new, tougher “architected material” can also have a larger plastic zone size, sometimes as much as 100 times larger, meaning it is likely to be ductile as well. This is why things like fabrics and meshes can be really hard to tear.

How are you applying what you’re learning to real-world materials?

LM: We’re building lots of our material architectures painstakingly at the small scale using shared fabrication and characterization facilities at the 91±ŹÁÏ. That “bottom-up” approach — building things one nanofeature at a time — gives us lots of control over the building blocks we’re playing with, but it’s a real challenge to scale.

The “top-down” approach, where you let physics and kinetics just self-assemble things for you, is much easier. One example is “solid-state foaming,” a technique my colleague has been working on for decades. Basically, you take a thermoplastic material — something that melts when you heat it up — throw it in a chamber with high-pressure carbon dioxide so it saturates the sample, then heat it up so that the dissolved gas forms tiny bubbles in the material. With this process we have less control over the precise architecture — it’s a random foam — but by controlling the amount of dissolved gas we can easily control the size of the bubbles. Those materials turned out to be super tough! My doctoral student has published a study of these foams, where we show they could even be tougher than the material they were made from. This goes against everything we knew about normal foam fracture processes.

A black and white image showing a dense, webbed material.

A plastic nanofoam material created by Kush Dwivedi, a doctoral student in Meza’s lab, seen at 2,500x, 12,000x and 35,000x magnifications. Credit: Dwivedi et al./Journal of the Mechanics and Physics of Solids.

I’m currently pursuing an earlier-stage commercialization effort to use tiny foams as a filtration material for biomedical applications. We can make nanoporous filter materials — think of the reverse osmosis system that might be under your sink — but we can do it without using any of the harsh chemical processes that are currently used. We’ve been able to explore this avenue thanks to our participation in the NSF I-Corps program, which then enabled us to get a CoMotion Innovation Gap Fund award.

I also recently got an NSF CAREER grant to study fracture in architected materials, and we’re exploring ways to make tougher sustainable and biodegradable materials. Think of the last time you used a biodegradable fork that broke off in your food. Materials like wood are actually great alternatives for this, but we’re trying to figure out how to do it without cutting down a tree or harvesting bamboo.

For more information contact Meza at lmeza@uw.edu.

]]>