Engineering and Technology Updates
Rechargeable cement-based batteries
Imagine an entire twenty-storey concrete building that can store energy like a giant battery. Thanks to unique research from Chalmers University of Technology, Sweden, such a vision could someday be a reality. Researchers from the Department of Architecture and Civil Engineering recently published an article outlining a new concept for rechargeable batteries made of cement.
The ever-growing need for sustainable building materials poses great challenges for researchers. Doctor Emma Zhang, formerly of Chalmers University of Technology, Sweden, joined Professor Luping Tang’s research group several years ago to search for the building materials of the future. Together they have now succeeded in developing a world-first concept for a rechargeable cement-based battery.

The concept involves first a cement-based mixture, with small amounts of short carbon fibres added to increase conductivity and flexural toughness. Embedded within the mixture is a metal-coated carbon-fibre mesh: iron for the anode and nickel for the cathode. After much experimentation, the researchers arrived at the prototype they now present.

Luping Tang and Emma Zhang’s research has produced a rechargeable cement-based battery with an average energy density of 7 watt-hours per square metre (0.8 watt-hours per litre). Energy density expresses the capacity of the battery, and a modest estimate is that the performance of the new Chalmers battery could be more than ten times that of earlier attempts at concrete batteries. The energy density is still low in comparison to commercial batteries, but this limitation could be offset by the sheer volume over which the battery could be built when used in buildings.

The fact that the battery is rechargeable is its most important quality, and the possibilities for use, if the concept is further developed and commercialised, are almost staggering. Energy storage is an obvious possibility; monitoring is another. The researchers see applications ranging from powering LEDs and providing 4G connections in remote areas to cathodic protection against corrosion in concrete infrastructure.
“It could also be coupled with solar cell panels for example, to provide electricity and become the energy source for monitoring systems in highways or bridges, where sensors operated by a concrete battery could detect cracking or corrosion,” suggests Emma Zhang. The concept of using structures and buildings in this way could be revolutionary, because it would offer an alternative solution to the energy crisis, by providing a large volume of energy storage. Concrete, which is formed by mixing cement with other ingredients, is the world’s most commonly used building material. From a sustainability perspective, it is far from ideal, but the potential to add functionality to it could offer a new dimension. The idea is still at a very early stage. The technical questions remaining to be solved before commercialisation of the technique can be a reality include extending the service life of the battery, and the development of recycling techniques. “Since concrete infrastructure is usually built to last fifty or even a hundred years, the batteries would need to be refined to match this, or to be easier to exchange and recycle when their service life is over. For now, this offers a major challenge from a technical point of view,” says Emma Zhang. But the researchers are hopeful that their innovation has a lot to offer. “We are convinced this concept makes for a great contribution to allowing future building materials to have additional functions such as renewable energy sources,” concludes Luping Tang.
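The reported 0.8 watt-hours per litre invites a quick back-of-envelope estimate of what whole-building storage might look like. The Python sketch below does the unit conversion; the concrete volume is a hypothetical illustration, not a figure from the Chalmers study.

```python
# Back-of-envelope storage estimate from the reported energy density
# of 0.8 Wh per litre. The building volume below is a hypothetical
# illustration, not a figure from the Chalmers study.

ENERGY_DENSITY_WH_PER_L = 0.8   # reported average for the Chalmers prototype
LITRES_PER_M3 = 1000

def building_storage_kwh(concrete_volume_m3: float) -> float:
    """Total storage in kWh if all structural concrete acted as battery."""
    wh = concrete_volume_m3 * LITRES_PER_M3 * ENERGY_DENSITY_WH_PER_L
    return wh / 1000

# A hypothetical 20-storey building with ~2,000 m^3 of structural concrete:
print(f"{building_storage_kwh(2000):.0f} kWh")  # prints "1600 kWh"
```

Even at low density, the sheer volume yields storage on the scale of hundreds of kilowatt-hours, which is exactly the researchers’ point about volume compensating for energy density.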
Interactive typeface for digital text
Scientists develop adaptive font that speeds up reading. AdaptiFont has recently been presented at CHI, the leading Conference on Human Factors in Computing. Language is without doubt the most pervasive medium for exchanging knowledge between humans. However, spoken language or abstract text needs to be made visible in order to be read, be it in print or on screen. How does the way a text looks affect its readability, that is, how it is read, processed, and understood? A team at TU Darmstadt’s Centre for Cognitive Science investigated this question at the intersection of perceptual science, cognitive science, and linguistics.

Electronic text is even more complex. Texts are read on different devices under different external conditions. And although any digital text is formatted initially, users might resize it on screen, change the brightness and contrast of the display, or even select a different font when reading text on the web. The team of researchers from TU Darmstadt has now developed a system that leaves font design to the user’s visual system. First, they needed a way of synthesizing new fonts. This was achieved with a machine learning algorithm that learned the structure of fonts by analysing 25 popular and classic typefaces. The system can create an infinite number of new fonts that are any intermediate form of others, for example, visually halfway between Helvetica and Times New Roman.

Some fonts may make a text more difficult to read and thus slow the reader down; others may help the user read more fluently. By measuring reading speed, a second algorithm can then generate further typefaces that increase it. In a laboratory experiment in which users read texts over one hour, the research team showed that their algorithm indeed generates new fonts that increase individual users’ reading speed. Interestingly, every reader had their own personalized font that made reading especially easy for them.
However, this individual favourite typeface does not necessarily fit all situations. “AdaptiFont therefore can be understood as a system which creates fonts for an individual dynamically and continuously while reading, which maximizes the reading speed at the time of use. This may depend on the content of the text, whether you are tired, or perhaps are using different display devices,” explains Professor Constantin A. Rothkopf, Centre for Cognitive Science and head of the institute of Psychology of Information Processing at TU Darmstadt.
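The generate-measure-adapt loop described above can be sketched in miniature. This toy version is illustrative only: the real system learns a high-dimensional font space from 25 typefaces and optimises it against reading speed measured during actual reading, whereas here the “font space” is a made-up 2-D style space and the optimiser is a simple hill climb.

```python
import random

# Toy 'font space': each coordinate stands in for a learned style
# dimension. The coordinates below are placeholders, not real data.
HELVETICA = (0.0, 0.0)
TIMES = (1.0, 1.0)

def interpolate(a, b, t):
    """A font visually 'between' a and b; t=0.5 is halfway."""
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

def adapt(measure_speed, start, steps=20, step_size=0.1):
    """Hill-climb toward the font that maximises measured reading speed."""
    best = start
    for _ in range(steps):
        candidate = tuple(c + random.uniform(-step_size, step_size)
                          for c in best)
        if measure_speed(candidate) > measure_speed(best):
            best = candidate
    return best

# A font halfway between the two anchors:
halfway = interpolate(HELVETICA, TIMES, 0.5)   # (0.5, 0.5)
```

In the real system the `measure_speed` callback is replaced by actual reading-speed measurements from the user, so the font keeps adapting while the person reads.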
Robot That Senses Hidden Objects – “We’re Trying to Give Robots Superhuman Perception”
In recent years, robots have gained artificial vision, touch, and even smell. Now researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfilment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.

As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That’s in part because robots struggle to locate and grasp objects in such a crowded environment. Using optical vision alone, robots can’t perceive the presence of an item packed away in a box or hidden behind another object on the shelf; visible light waves, of course, don’t pass through walls. But radio waves can.

For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to, or in the case of pets implanted in, the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader. The reflected signal provides information about the location and identity of the tagged item. The technology has gained popularity in retail supply chains; Japan, for example, aims to use RF tracking for nearly all retail purchases within a matter of years.

The researchers realized this profusion of RF could be a boon for robots, giving them another mode of perception. RF-Grasp uses both a camera and an RF reader to find and grab tagged objects, even when they’re fully blocked from the camera’s view. It consists of a robotic arm attached to a grasping hand. The camera sits on the robot’s wrist.
The RF reader stands independent of the robot and relays tracking information to the robot’s control algorithm. So the robot is constantly collecting both RF tracking data and a visual picture of its surroundings. Integrating these two data streams into the robot’s decision making was one of the biggest challenges the researchers faced.

The robot initiates the seek-and-pluck process by pinging the target object’s RF tag for a sense of its whereabouts. The sequence is akin to hearing a siren from behind, then turning to look and get a clearer picture of the siren’s source. With its two complementary senses, RF-Grasp zeroes in on the target object. As it gets closer and even starts manipulating the item, vision, which provides much finer detail than RF, dominates the robot’s decision making.

RF-Grasp proved its efficiency in a battery of tests. Compared to a similar robot equipped with only a camera, RF-Grasp was able to pinpoint and grab its target object with about half as much total movement. It also displayed the unique ability to “declutter” its environment, removing packing materials and other obstacles in its way in order to access the target. RF-Grasp could one day perform fulfilment in packed e-commerce warehouses. Its RF sensing could even instantly verify an item’s identity without the need to manipulate the item, expose its barcode, and scan it.
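The hand-off from RF to vision as the gripper closes in can be illustrated with a weighted blend of the two position estimates. This is a sketch of the general idea only; the weighting schedule and the 0.5-metre hand-off distance are invented for illustration and are not taken from the RF-Grasp paper.

```python
def fuse_position(rf_xyz, cam_xyz, distance_m, handoff_m=0.5):
    """Blend RF and camera estimates of the target's position.

    Far from the target, trust the RF estimate (it sees through
    occlusions); close in, trust the finer-grained camera estimate.
    The hand-off distance is a hypothetical tuning parameter.
    """
    w_vision = min(1.0, handoff_m / max(distance_m, 1e-6))
    return tuple(w_vision * c + (1 - w_vision) * r
                 for r, c in zip(rf_xyz, cam_xyz))

# 1 m out the two estimates are weighted 50/50; at 0.25 m vision dominates.
far = fuse_position((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), distance_m=1.0)
near = fuse_position((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), distance_m=0.25)
```

The smooth shift of weight toward vision mirrors the behaviour described above, where RF guides the approach and the camera takes over for fine manipulation.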
New technology converts waste plastics to jet fuel in an hour
Washington State University researchers have developed an innovative way to convert plastics into ingredients for jet fuel and other valuable products, making it easier and more cost-effective to reuse plastics. In their reaction, the researchers were able to convert 90% of plastic to jet fuel and other valuable hydrocarbon products within an hour at moderate temperatures, and to easily fine-tune the process to create the products they want.

In recent decades, the accumulation of waste plastics has caused an environmental crisis, polluting oceans and pristine environments around the world. As they degrade, tiny pieces of microplastics have been found to enter the food chain and become a potential, though poorly understood, threat to human health. Plastics recycling, however, has been problematic. The most common mechanical recycling methods melt the plastic and re-mold it, but that lowers its economic value and quality for use in other products. Chemical recycling can produce higher-quality products, but it has required high reaction temperatures and a long processing time, making it too expensive and cumbersome for industries to adopt. Because of these limitations, only about 9% of plastic in the U.S. is recycled every year.

In their work, the WSU researchers developed a catalytic process to efficiently convert polyethylene to jet fuel and high-value lubricants. Polyethylene is the most commonly used plastic, found in a huge variety of products, from plastic bags, milk jugs and shampoo bottles to corrosion-resistant piping, wood-plastic composite lumber and plastic furniture. For the process, the researchers used a ruthenium-on-carbon catalyst and a commonly used solvent. They were able to convert about 90% of the plastic to jet fuel components or other hydrocarbon products within an hour at a temperature of 220 degrees Celsius, well below the temperatures typically required for chemical recycling.
Chuhua Jia, a graduate student on the project, was surprised to see just how well the solvent and catalyst worked. “Before the experiment, we only speculated but didn’t know if it would work,” he said. “The result was so good.” Adjusting processing conditions, such as the temperature, time or amount of catalyst used, provided the critically important step of being able to fine-tune the process to create desirable products, said Hongfei Lin, the professor who led the work. “Depending on the market, they can tune to what product they want to generate,” he said. “They have flexibility. The application of this efficient process may provide a promising approach for selectively producing high-value products from waste polyethylene.”
Smaller chips open door to new RFID applications
Researchers at North Carolina State University have made what is believed to be the smallest state-of-the-art RFID chip, which should drive down the cost of RFID tags. In addition, the chip’s design makes it possible to embed RFID tags into high value chips, such as computer chips, boosting supply chain security for high-end technologies. “As far as we can tell, it’s the world’s smallest Gen2-compatible RFID chip,” says Paul Franzon, corresponding author of a paper on the work and Cirrus Logic Distinguished Professor of Electrical and Computer Engineering at NC State. Gen2 RFID chips are state of the art and are already in widespread use. One of the things that sets these new RFID chips apart is their size. They measure 125 micrometers (μm) by 245 μm. Manufacturers were able to make smaller RFID chips using earlier technologies, but Franzon and his collaborators have not been able to identify smaller RFID chips that are compatible with the current Gen2 technology. “The size of an RFID tag is largely determined by the size of its antenna — not the RFID chip,” Franzon says. “But the chip is the expensive part.” The smaller the chip, the more chips you can get from a single silicon wafer. And the more chips you can get from the silicon wafer, the less expensive they are. “In practical terms, this means that we can manufacture RFID tags for less than one cent each if we’re manufacturing them in volume,” Franzon says. That makes it more feasible for manufacturers, distributors or retailers to use RFID tags to track lower-cost items. For example, the tags could be used to track all of the products in a grocery store without requiring employees to scan items individually. “Another advantage is that the design of the circuits we used here is compatible with a wide range of semiconductor technologies, such as those used in conventional computer chips,” says Kirti Bhanushali, a researcher. 
“This makes it possible to incorporate RFID tags into computer chips, allowing users to track individual chips throughout their life cycle. This could help to reduce counterfeiting, and allow you to verify that a component is what it says it is.” “We’ve demonstrated what is possible, and we know that these chips can be made using existing manufacturing technologies,” Franzon says. “We’re now interested in working with industry partners to explore commercializing the chip in two ways: creating low-cost RFID at scale for use in sectors such as grocery stores; and embedding RFID tags into computer chips in order to secure high-value supply chains.”
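Franzon’s cost argument (smaller die, more dies per wafer, cheaper chips) is easy to quantify. The sketch below computes an upper-bound die count for the reported 125 μm × 245 μm chip; the wafer diameter and wafer cost are hypothetical choices for illustration, and the count ignores scribe lanes, edge loss, and yield.

```python
import math

CHIP_W_UM, CHIP_H_UM = 125.0, 245.0   # reported die size

def chips_per_wafer(wafer_diameter_mm=200.0):
    """Upper-bound die count: wafer area divided by die area.

    Ignores scribe lanes, edge loss, and yield, so real counts
    would be lower. The 200 mm wafer is a hypothetical choice.
    """
    wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
    die_area_mm2 = (CHIP_W_UM / 1000) * (CHIP_H_UM / 1000)
    return int(wafer_area_mm2 // die_area_mm2)

n = chips_per_wafer()          # roughly a million dies per wafer
cost_per_chip = 5000.0 / n     # hypothetical $5,000 wafer: well under $0.01
```

Even this crude bound puts roughly a million dies on a single 200 mm wafer, which is how a per-chip cost below one cent becomes plausible at volume.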
Engineers harvest WiFi signals to power small electronics
With the rise of the digital age, the number of WiFi sources transmitting information wirelessly between devices has grown exponentially. This results in the widespread use of the 2.4 GHz radio frequency that WiFi uses, with excess signals available to be tapped for alternative uses. To harness this under-utilised source of energy, a research team from the National University of Singapore (NUS) and Japan’s Tohoku University has developed a technology that uses tiny smart devices known as spin-torque oscillators (STOs) to harvest and convert wireless radio frequencies into energy to power small electronics. In their study, the researchers successfully harvested energy from WiFi-band signals to power a light-emitting diode (LED) wirelessly, without using any battery.

“We are surrounded by WiFi signals, but when we are not using them to access the Internet, they are inactive, and this is a huge waste. Our latest result is a step towards turning readily available 2.4 GHz radio waves into a green source of energy, hence reducing the need for batteries to power electronics that we use regularly. In this way, small electric gadgets and sensors can be powered wirelessly by using radio frequency waves as part of the Internet of Things. With the advent of smart homes and cities, our work could give rise to energy-efficient applications in communication, computing, and neuromorphic systems,” said Professor Yang Hyunsoo of the NUS Department of Electrical and Computer Engineering, who spearheaded the project.

Spin-torque oscillators are a class of emerging devices that generate microwaves and have applications in wireless communication systems. However, their application is hindered by low output power and broad linewidth. While mutual synchronisation of multiple STOs is one way to overcome this problem, current schemes, such as short-range magnetic coupling between multiple STOs, have spatial restrictions.
Long-range electrical synchronisation using vortex oscillators, on the other hand, is limited to frequency responses of only a few hundred MHz. It also requires dedicated current sources for the individual STOs, which can complicate the overall on-chip implementation. To overcome the spatial and low-frequency limitations, the research team came up with an array in which eight STOs are connected in series. Using this array, the 2.4 GHz electromagnetic radio waves that WiFi uses were converted into a direct voltage signal, which was then transmitted to a capacitor to light up a 1.6-volt LED. When the capacitor was charged for five seconds, it was able to light up the same LED for one minute after the wireless power was switched off.

In their study, the researchers also highlighted the importance of electrical topology for designing on-chip STO systems, comparing the series design with a parallel one. They found that the parallel configuration is more useful for wireless transmission due to better time-domain stability, spectral noise behaviour, and control over impedance mismatch, while series connections have an advantage for energy harvesting due to the additive effect of the diode voltage from the STOs.

Commenting on the significance of the results, Dr Raghav Sharma, a lead researcher, shared, “Aside from coming up with an STO array for wireless transmission and energy harvesting, our work also demonstrated control over the synchronising state of coupled STOs using injection locking from an external radio-frequency source. These results are important for prospective applications of synchronised STOs, such as fast-speed neuromorphic computing.”
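The five-second charge powering a one-minute LED run can be put in perspective with the standard capacitor energy formula E = ½CV². The capacitance below is a hypothetical value chosen for illustration; the study specifies the 1.6 V LED, but the capacitor size here is assumed.

```python
def capacitor_energy_j(capacitance_f: float, voltage_v: float) -> float:
    """Energy stored in a capacitor: E = 0.5 * C * V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

# Hypothetical 100 mF capacitor charged to the LED's 1.6 V:
energy_j = capacitor_energy_j(0.1, 1.6)   # 0.128 J stored
avg_power_mw = energy_j / 60 * 1000       # ~2.1 mW averaged over the 1-min run
```

Milliwatt-scale average power is consistent with driving a small indicator LED, which is the class of load the NUS-Tohoku demonstration targeted.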
NASA’s Ingenuity Mars Helicopter succeeds in historic first flight
NASA’s Ingenuity Mars Helicopter became the first aircraft in history to make a powered, controlled flight on another planet. The Ingenuity team at the agency’s Jet Propulsion Laboratory in Southern California confirmed the flight succeeded after receiving data from the helicopter via NASA’s Perseverance Mars rover. The solar-powered helicopter first became airborne at 3:34 a.m. EDT (12:34 a.m. PDT), 12:33 Local Mean Solar Time (Mars time), a time the Ingenuity team determined would have optimal energy and flight conditions. Altimeter data indicate Ingenuity climbed to its prescribed maximum altitude of 10 feet (3 meters) and maintained a stable hover for 30 seconds. It then descended, touching back down on the surface of Mars after logging a total of 39.1 seconds of flight. Additional details on the test are expected in upcoming downlinks.

Ingenuity’s initial flight demonstration was autonomous, piloted by onboard guidance, navigation, and control systems running algorithms developed by the team at JPL. Because data must be sent to and returned from the Red Planet over hundreds of millions of miles using orbiting satellites and NASA’s Deep Space Network, Ingenuity cannot be flown with a joystick, and its flight was not observable from Earth in real time.

NASA Associate Administrator for Science Thomas Zurbuchen announced the name of the Martian airfield on which the flight took place: Wright Brothers Field. Ingenuity’s chief pilot, Håvard Grip, announced that the International Civil Aviation Organization (ICAO), the United Nations’ civil aviation agency, had presented NASA and the Federal Aviation Administration with the official ICAO designator IGY, call-sign INGENUITY. As one of NASA’s technology demonstration projects, the 19.3-inch-tall (49-centimeter-tall) Ingenuity Mars Helicopter contains no science instruments inside its tissue-box-size fuselage.
Instead, the 4-pound (1.8-kg) rotorcraft is intended to demonstrate whether future exploration of the Red Planet could include an aerial perspective. This first flight was full of unknowns. The Red Planet has significantly lower gravity, one-third that of Earth’s, and an extremely thin atmosphere, with only 1% of Earth’s pressure at the surface. This means there are relatively few air molecules with which Ingenuity’s two 4-foot-wide (1.2-meter-wide) rotor blades can interact to achieve flight. The helicopter contains unique components, as well as off-the-shelf commercial parts, many from the smartphone industry, that were tested in deep space for the first time with this mission.

Parked about 211 feet (64.3 meters) away at Van Zyl Overlook during Ingenuity’s historic first flight, the Perseverance rover not only acted as a communications relay between the helicopter and Earth, but also chronicled the flight operations with its cameras. The pictures from the rover’s Mastcam-Z and Navcam imagers will provide additional data on the helicopter’s flight. Perseverance touched down with Ingenuity attached to its belly on Feb. 18. Deployed to the surface of Jezero Crater on April 3, Ingenuity is currently on the 16th sol, or Martian day, of its 30-sol (31-Earth-day) flight test window. Over the next three sols, the helicopter team will receive and analyze all data and imagery from the test and formulate a plan for the second experimental test flight, scheduled for no earlier than April 22. If the helicopter survives the second flight test, the Ingenuity team will consider how best to expand the flight profile.
Less than a nanometer thick, stronger and more versatile than steel
Scientists create stable nanosheets containing boron and hydrogen atoms with potential applications in nanoelectronics and quantum information technology. What’s thinner than thin? One answer is two-dimensional materials: exotic materials with length and width but only one or two atoms in thickness. They offer the possibility of unprecedented boosts in device performance for electronic devices, solar cells, batteries and medical equipment.

In collaboration with Northwestern University and the University of Florida, scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory report in Science magazine a breakthrough involving a 2D material called borophane, a sheet of boron and hydrogen a mere two atoms in thickness. One of the most exciting developments in materials science in recent decades has been a 2D sheet of carbon (graphene), which is one atom thick and 200 times stronger than steel. A similarly promising and newer material is an atom-thick sheet of boron, called borophene, with an “e.” A multi-institutional team, including researchers in Argonne’s Center for Nanoscale Materials (a DOE Office of Science User Facility), first synthesized borophene in 2015.

While graphene is simply one atomic layer out of the many identical layers in the common material graphite, borophene has no equivalent parent structure and is very difficult to prepare. What’s more, borophene reacts rapidly with air, so it is very unstable and changes form readily. “Borophene by itself has all kinds of problems,” said Mark Hersam, Professor of Materials Science and Engineering at Northwestern University. “But when we mix borophene with hydrogen, the product suddenly becomes much more stable and attractive for use in the burgeoning fields of nanoelectronics and quantum information technology.” The research team grew borophene on a silver substrate and then exposed it to hydrogen to form the borophane.
They then unraveled the complex structure of borophane by combining a scanning tunneling microscope with a computer-vision-based algorithm that compares theoretical simulations of structures with experimental measurements. Computer vision is a branch of artificial intelligence that trains high-performance computers to interpret and understand the visual world. Even though the borophane material is only two atoms thick, its structure is quite complex because of the many possible arrangements for the boron and hydrogen atoms. “We have tackled a significant challenge in determining the atomic structures from scanning tunneling microscopy images and computational modeling at the atomic scale with the help of computer vision,” said Argonne’s Maria Chan, nanoscientist at the Center for Nanoscale Materials. Given the success in unraveling this complex structure, the team’s automated analytical technique should be applicable in identifying other complex nanostructures in the future. “What is really encouraging from our results is that we found a borophane nanosheet on a silver substrate to be quite stable, unlike borophene,” said Pierre Darancet, nanoscientist at Argonne’s Center for Nanoscale Materials. “This means it should be easily integrated with other materials in the construction of new devices for optoelectronics, devices combining light with electronics.” Such light-controlling and light-emitting devices could be incorporated into telecommunications, medical equipment and more.
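The comparison step, matching simulated STM images against the experimental one, can be illustrated with normalized cross-correlation, a standard similarity score in computer vision. This is a generic sketch of the idea, not the team’s actual algorithm.

```python
import numpy as np

def ncc(simulated: np.ndarray, measured: np.ndarray) -> float:
    """Normalized cross-correlation between two images (1.0 = identical
    up to brightness/contrast). A structure search would rank candidate
    simulated STM images by this score against the experimental image."""
    a = (simulated - simulated.mean()) / simulated.std()
    b = (measured - measured.mean()) / measured.std()
    return float((a * b).mean())

# The score is invariant to overall brightness and contrast changes:
rng = np.random.default_rng(0)
img = rng.random((64, 64))
assert ncc(img, 2 * img + 3) > 0.999
```

Ranking many simulated candidate structures by such a score against the measured image is one common way to automate structure identification.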
Wake steering potentially boosts energy production at US wind plants
Wake steering is a strategy employed at wind power plants in which upstream turbines are intentionally misaligned with the wind direction to deflect their wakes away from downstream turbines, increasing the net production of wind power at a plant. Researchers at the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) illustrate how wake steering can increase energy production for a large sample of commercial land-based U.S. wind power plants. While some plants showed less potential for wake steering due to unfavourable meteorological conditions or turbine layout, several wind power plants were ideal candidates that could benefit greatly from wake steering control. Overall, a predicted average annual production gain of 0.8% was found for the set of wind plants investigated. In addition, the researchers found that on wind plants using wake steering, wind turbines could be placed more closely together, increasing the amount of power produced in a given area by nearly 70% while maintaining the same cost of energy generation. “We were surprised to see that there was still a large amount of variability in the potential energy improvement from wake steering, even after accounting for the wake losses of different wind plants,” said author David Bensason.

Just as umbrellas may cast a shadow, wind turbines create a region of slower, more turbulent air flow downstream of their rotor, known as a wake. When these wakes flow into another turbine, they reduce its power production capacity. The wake steering strategy “steers” these wakes away from turbines by offsetting the angle between the rotor face and the incoming wind direction. This technique sacrifices the power efficiency of a few turbines in order to increase the performance of the wind power plant as a whole. Wake steering can only increase energy production if there are wake losses to start with.
Consequently, the benefits of wake steering tend to increase for wind plants with higher wake losses. The study is one of the first to use the Gauss-Curl-Hybrid wake model, which NREL developed. This model predicts wake behaviour in a wind plant more accurately than prior models and captures effects that are more prominent in large-scale plants. The researchers also combined several publicly available databases and tools that together make the investigation of wake steering potential for a large sample of U.S. wind plants possible.
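The underlying trade-off, sacrificing a little upstream power to recover more downstream, is often modelled with a cosine-exponent yaw loss. The sketch below uses that common simplification; the exponent is an empirical tuning value, not a number from the NREL study.

```python
import math

def yawed_power_ratio(yaw_deg: float, p: float = 1.88) -> float:
    """Fraction of unwaked power a turbine produces when yawed,
    using the cosine-exponent model common in wake-steering studies.
    The exponent p is an empirical tuning value, not a measured one."""
    return math.cos(math.radians(yaw_deg)) ** p

# A 20-degree offset costs the upstream turbine roughly 11% of its power;
# steering pays off only when downstream gains exceed that loss.
upstream_loss = 1.0 - yawed_power_ratio(20.0)
```

This simple relation makes the paper’s point concrete: plants with small wake losses have little downstream gain available to outweigh the guaranteed upstream cost, which is why the benefit varies so much between sites.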
Microfluidic Chip Simplifies COVID-19 Testing, Delivers Results on a Phone in 55 Minutes or Less
COVID-19 can be diagnosed in 55 minutes or less with the help of programmed magnetic nanobeads and a diagnostic tool that plugs into an off-the-shelf cellphone, according to Rice University engineers. The Rice lab of mechanical engineer Peter Lillehoj has developed a stamp-sized microfluidic chip that measures the concentration of SARS-CoV-2 nucleocapsid (N) protein in blood serum from a standard finger prick. The nanobeads bind to SARS-CoV-2 N protein, a biomarker for COVID-19, in the chip and transport it to an electrochemical sensor that detects minute amounts of the biomarker. The researchers argue that their process simplifies sample handling compared to the swab-based PCR tests that are widely used to diagnose COVID-19 and must be analyzed in a laboratory.

Lillehoj and Rice graduate student and lead author Jiran Li took advantage of existing biosensing tools and combined them with their own experience in developing simple diagnostics, like a microneedle patch introduced last year to diagnose malaria. The new tool relies on a slightly more complex detection scheme but delivers accurate, quantitative results in a short amount of time. To test the device, the lab relied on donated serum samples from people who were healthy and others who were COVID-19-positive. Lillehoj said a longer incubation yields more accurate results when using whole serum. The lab found that 55 minutes was the optimum amount of time for the microchip to sense SARS-CoV-2 N protein at concentrations as low as 50 picograms (trillionths of a gram) per milliliter in whole serum.
The microchip could detect N protein at even lower concentrations, down to 10 picograms per milliliter, in only 25 minutes by diluting the serum fivefold. Paired with a Google Pixel 2 phone and a plug-in potentiostat, the system was able to deliver a positive diagnosis at a concentration as low as 230 picograms per milliliter for whole serum.

A capillary tube is used to deliver the sample to the chip, which is then placed on a magnet that pulls the beads toward an electrochemical sensor coated with capture antibodies. The beads bind to the capture antibodies and generate a current proportional to the concentration of biomarker in the sample. The potentiostat reads that current and sends a signal to its phone app. If there are no COVID-19 biomarkers, the beads do not bind to the sensor and are washed away inside the chip. Lillehoj said it would not be difficult for industry to manufacture the microfluidic chips, or to adapt them to new COVID-19 strains if and when that becomes necessary.
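Since the sensor current is proportional to biomarker concentration, reading out a result amounts to inverting a linear calibration. The sketch below shows that step; the slope, baseline, and the example current reading are all hypothetical, as the actual calibration constants are not given here.

```python
def concentration_pg_ml(current_na: float,
                        slope_na_per_pg_ml: float,
                        baseline_na: float = 0.0) -> float:
    """Invert an assumed linear sensor response:
    current = baseline + slope * concentration."""
    return (current_na - baseline_na) / slope_na_per_pg_ml

# Hypothetical calibration: 0.2 nA of signal per pg/mL of N protein.
reading = concentration_pg_ml(current_na=12.0, slope_na_per_pg_ml=0.2)
flag_positive = reading >= 50.0   # 50 pg/mL whole-serum floor from the study
```

In the real device this inversion and thresholding would live in the phone app that receives the potentiostat’s current measurement.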