Image credit: NASA.

(PhysOrg.com) — Scientists in Arizona say that if a medium-sized asteroid were to crash into the ocean, the ozone layer could be depleted, allowing high levels of ultraviolet radiation to reach the surface.

Dr. Elisabetta Pierazzo and colleagues from the Planetary Science Institute in Tucson ran computer simulations revealing that if an asteroid 500 m to 1 km in diameter were to hit the Pacific Ocean, it would eject enough water vapor and sea salt high into the atmosphere to affect the protective ozone layer. The simulations showed that the 1 km asteroid could affect an area over 1,000 km in diameter, ejecting vast quantities of water and vapor up to 160 km high. The scientists say the water vapor would contain chlorine and bromine from the vaporized sea salts, and this would significantly deplete the global ozone layer by destroying it faster than it is naturally created.

More information: E. Pierazzo et al., Ozone perturbation from medium-size asteroid impacts in the ocean, Earth and Planetary Science Letters, Article in Press, doi:10.1016/j.epsl.2010.08.036

Citation: Asteroid strike into ocean could deplete ozone layer (2010, October 27), retrieved 18 August 2019 from https://phys.org/news/2010-10-asteroid-ocean-deplete-ozone-layer.html. © 2010 PhysOrg.com. This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
Pierazzo said such an asteroid would produce “an ozone hole that will engulf the entire Earth,” along with a huge spike in ultraviolet (UV) radiation, with levels higher than anywhere on the surface today. The simulations showed the smaller asteroid, 500 meters across, could produce ultraviolet index (UVI) levels of 20 or more in the northern tropics for several months, with global ozone depletion similar to the record ozone holes seen over the Antarctic in the mid-1990s. The 1 km asteroid could produce a spike of 56, and levels over 20 for about two years in both the northern and southern hemispheres. The UVI is a measure of UV intensity; levels over 10 are considered dangerous, and the highest UVI recorded in recent times is 20.

Pierazzo said previous studies of the effects of asteroid impacts on the ocean have concentrated on tsunamis, but her research found that a medium-sized strike would also make it difficult to grow crops and would have a long-term negative effect on global food production. With enough warning of an impending strike, she said, farmers could plant crops with high UV tolerance and food could be stored to ensure supplies during the period of low productivity.

Other effects would include increased rates of skin cancer and cataracts. People might also have to avoid direct sunlight to prevent rapid sunburn. A UVI level of 56 has never been experienced, so its effects are uncertain, but it is likely that people would have to remain indoors during daylight to avoid serious sunburn. The study said over 100 asteroids 1 to 2 km in diameter are thought to be orbiting in paths that could bring them near Earth, and many more smaller asteroids appear to be “currently looming undiscovered in the Earth’s neighborhood.” NASA estimates there are around 800 such Near Earth Objects (NEOs).
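The UV index quoted throughout is a standardized quantity: under the WHO/WMO definition it is the erythemally weighted irradiance in W/m², multiplied by 40. As a rough illustration (the conversion factor and category bands are the standard ones; the sample irradiances are simply back-computed from the UVI figures quoted above), the record level of 20 and the simulated spike of 56 correspond to weighted irradiances of 0.5 and 1.4 W/m²:

```python
# UV index from erythemally weighted irradiance (WHO/WMO definition):
# UVI = 40 * E_ery, with E_ery in W/m^2.
def uv_index(erythemal_irradiance_w_m2):
    return 40.0 * erythemal_irradiance_w_m2

def uvi_category(uvi):
    """Standard WHO exposure categories for the UV index."""
    if uvi < 3:
        return "low"
    if uvi < 6:
        return "moderate"
    if uvi < 8:
        return "high"
    if uvi < 11:
        return "very high"
    return "extreme"

for e in (0.5, 1.4):
    uvi = uv_index(e)
    print(f"{e} W/m^2 -> UVI {uvi:.0f} ({uvi_category(uvi)})")
# Both the recorded maximum (UVI 20) and the simulated spike (UVI 56)
# land deep in the "extreme" band, which starts at UVI 11.
```

Note that the dangerous-threshold of 10 cited in the article sits at the top of the “very high” band; everything the simulations predict is well beyond the scale's normal range.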
The authors say past research suggests that, on average, an asteroid 500 meters wide or smaller hits the Earth about once every 200,000 years, while a larger asteroid strike happens around once every 800,000 years. The research covered only ocean impacts, since these are twice as likely to occur as land impacts. The results are published in the journal Earth and Planetary Science Letters.
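Those recurrence intervals translate into per-year rates, and a quick Poisson calculation (a standard way to treat rare, independent events; the intervals are the ones quoted above) shows how small the chance of such a strike is on a human timescale:

```python
import math

def prob_at_least_one(mean_interval_years, window_years):
    """Probability of at least one impact in a time window, modeling
    impacts as a Poisson process with the given mean recurrence interval."""
    rate = 1.0 / mean_interval_years
    return 1.0 - math.exp(-rate * window_years)

# ~1-in-200,000-year events (asteroids up to ~500 m), over one century:
print(f"{prob_at_least_one(200_000, 100):.4%}")  # ~0.05%
# ~1-in-800,000-year events (larger asteroids), over one century:
print(f"{prob_at_least_one(800_000, 100):.4%}")  # ~0.0125%
```

For windows much shorter than the recurrence interval the answer is essentially `window / interval`, which is why the exponential barely matters here.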
Citation: Transistor performance improves due to quantum confinement effects (2011, March 21), retrieved 18 August 2019 from https://phys.org/news/2011-03-transistor-due-quantum-confinement-effects.html

The team of researchers, Krutarth Trivedi, Hyungsang Yuk, Herman Carlo Floresca, Moon J. Kim, and Walter Hu, from the University of Texas at Dallas, published their study in a recent issue of Nano Letters. In the study, the researchers lithographically fabricated silicon nanowires with diameters of just 3-5 nanometers. At diameters this small, the nanowires experience quantum confinement effects that shift their properties away from bulk values. Specifically, transistors made with the thin nanowires have improved hole mobility, drive current, and current density, properties that make the transistors operate more quickly and efficiently. Their performance even surpasses that of recently reported silicon nanowire transistors that use doping to improve performance.

“The significance of this research is that we have demonstrated that increasing the degree of quantum confinement of the silicon channel results in increasing the carrier mobility,” Hu told PhysOrg.com. “We provide experimental proof of the theoretically simulated high hole mobility of about 3-nm-diameter nanowires.”

At first it may seem counterintuitive that a smaller wire can have a higher mobility than a larger one. But as the researchers explain, quantum confinement increases carrier mobility by confining the holes (which carry the current) to a narrower range of energies than they occupy in bulk silicon. Whereas in bulk silicon holes with a broad energy distribution contribute to the current, in the tiny nanowires the holes’ energy distribution is much narrower. Holes with similar energy, and therefore similar mass, scatter less, which in turn improves mobility and current density.
By comparing the performance of tiny nanowires to similarly fabricated nanobelts, in which only the thickness dimension is confined, the researchers also showed that increasing the degree of quantum confinement of the channel results in higher carrier mobility.

More information: Krutarth Trivedi, et al. “Quantum Confinement Induced Performance Enhancement in Sub-5-nm Lithographic Si Nanowire Transistors.” Nano Letters. DOI: 10.1021/nl103278a

As the researchers note, fabricating the high-performance sub-5-nanometer silicon nanowire transistors is relatively simple compared to other nanowire fabrication methods, which use bottom-up growth and doped junctions or channel doping. One application the researchers plan to pursue is inexpensive, ultrasensitive biosensors, since biosensor sensitivity increases as nanowire diameter decreases. “As required by our funding (NSF Career Award), our immediate plan is to explore biosensing of protein with these types of tiny nanowire transistors,” Hu said. “We believe such small-diameter nanowires with intrinsic high performance can have a major impact on biosensing, as they are expected to provide ultimate sensitivity down to a single molecule with a better signal-to-noise ratio.”

In addition to biosensing, the new high-performance transistors could have an impact on CMOS scaling, which is becoming increasingly difficult; the researchers are currently seeking funding to explore this area. “These transistors can have an impact on CMOS scaling due to the fact that performance actually increases with decreasing diameter,” Hu said.
“Arrays of nanowire transistors with tiny nanowires could be made to achieve high performance without requiring new processing techniques. In fact, the processing can even be simplified over current techniques, as our nanowire transistors do not use highly doped complementary junctions for source/drain; eliminating highly doped junctions alleviates many of the current issues in scaling down CMOS processing to the nanoscale.

“At large, my personal viewpoint is that silicon still has a lot of potential for nanoelectronics, and the industry may want to consider supporting research in silicon nanowire or quantum wire devices and new architectures to fully unleash the potential of silicon. Everyone is researching graphene, which is a great material of course, but we may not want to ignore the potential of silicon, as we show that effective hole mobility can be over 1200.”

A cross-sectional view of a 5.1-nm nanowire, taken with a high-resolution transmission electron microscope. The scale bar is 5 nm. Image credit: Krutarth Trivedi, et al. ©2011 American Chemical Society.

(PhysOrg.com) — Manufacturing on the nanoscale has come a long way since Feynman’s visions of nanotechnology more than 50 years ago. Since then, studies have demonstrated how low-dimensional structures, such as nanowires and quantum dots, have unique properties that can improve the performance of a variety of devices. In the latest study in this area, researchers have fabricated transistors made with exceptionally thin silicon nanowires that exhibit high performance due to quantum confinement effects in the nanowires.
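The scale of the confinement effect in a 3-5 nm wire can be illustrated with a textbook particle-in-a-box estimate. This is not the authors’ simulation, just an order-of-magnitude sketch; the heavy-hole effective mass of ~0.49 m₀ for silicon is an assumed round value:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant (J*s)
M0 = 9.1093837e-31      # electron rest mass (kg)
EV = 1.602176634e-19    # joules per electron volt

def confinement_energy_mev(diameter_m, eff_mass=0.49 * M0):
    """Ground-state energy shift for a carrier confined in an infinite
    1D well of width equal to the wire diameter:
        E = hbar^2 * pi^2 / (2 * m * d^2)
    Returned in meV."""
    e_joules = (HBAR * math.pi) ** 2 / (2 * eff_mass * diameter_m ** 2)
    return e_joules / EV * 1000.0

# Confinement energy grows as 1/d^2, so a 3 nm wire is far more strongly
# quantized than a 10 nm one (~85 meV vs ~8 meV for a silicon heavy hole):
print(f"{confinement_energy_mev(3e-9):.0f} meV vs {confinement_energy_mev(10e-9):.1f} meV")
```

A subband spacing of tens of meV is larger than thermal energy at room temperature (~26 meV), which is the regime where holes pile into a narrow energy range; that is the scattering-suppression picture the researchers describe.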
Effect of replacing crops and grasses with high-emitting SRC species in our biofuel cultivation scenario. Credit: Nature Climate Change (2013) doi:10.1038/nclimate1788

Journal information: Nature Climate Change

More information: Impacts of biofuel cultivation on mortality and crop yields, Nature Climate Change (2013) doi:10.1038/nclimate1788

Abstract: Ground-level ozone is a priority air pollutant, causing ~22,000 excess deaths per year in Europe, significant reductions in crop yields and loss of biodiversity. It is produced in the troposphere through photochemical reactions involving oxides of nitrogen (NOx) and volatile organic compounds (VOCs). The biosphere is the main source of VOCs, with an estimated 1,150 TgC yr−1 (~90% of total VOC emissions) released from vegetation globally. Isoprene (2-methyl-1,3-butadiene) is the most significant biogenic VOC in terms of mass (around 500 TgC yr−1) and chemical reactivity, and plays an important role in the mediation of ground-level ozone concentrations. Concerns about climate change and energy security are driving an aggressive expansion of bioenergy crop production, and many of these plant species emit more isoprene than the traditional crops they are replacing. Here we quantify the increases in isoprene emission rates caused by cultivation of 72 Mha of biofuel crops in Europe. We then estimate the resultant changes in ground-level ozone concentrations and the impacts on human mortality and crop yields that these could cause. Our study highlights the need to consider more than simple carbon budgets when considering the cultivation of biofuel feedstock crops for greenhouse-gas mitigation.

(Phys.org)—A trio of researchers from the Lancaster Environment Centre in the UK has found that planting trees for use as a biofuel source near populated areas is likely to increase human deaths due to inhalation of ozone.
The team, in a paper published in the journal Nature Climate Change, suggests that increased levels of isoprene emitted from such trees, when interacting with other air pollutants, can lead to increased levels of ozone in the air, which might also lower crop yields.

To reduce the amount of carbon dioxide released into the atmosphere by the burning of fossil fuels, governments and private groups have turned to biofuels as an alternative source. In Europe, fast-growing trees such as eucalyptus, willow and poplar have been planted and are being used to create biofuels that can be burned in engines and generators. Such trees have been seen as an attractive alternative to edible crops, such as corn, because growing them doesn’t affect the price of food. Now, however, this new research suggests that there is a different price to pay for using trees to produce biofuels.

The problem, the researchers say, is that the types of trees used to produce biofuels emit high levels of the chemical isoprene into the air. Prior research has shown that when isoprene mixes with other pollutants (such as nitrogen oxides), ozone is produced. In this new research, the team suggests that using such trees as a biofuel could result in up to 1,400 deaths per year in Europe, under the European Union’s 2020 planting goal, attributable to increased amounts of ozone in the air, along with $7.1 billion in additional health care costs and crop losses.

Plans for using trees as a biofuel resource generally involve planting near large urban areas to avoid transportation costs. Such plantings, the researchers suggest, would lead to lung problems and deaths for people living in those areas.
Conversely, if large numbers of such trees were planted in rural areas, edible crops would be adversely affected, leading to lower production and higher costs. The team also notes that ozone is currently blamed by the European Environment Agency for the deaths of 22,000 people in Europe each year.

Citation: Research shows isoprene from biofuel plants likely to lead to ozone deaths (2013, January 7), retrieved 18 August 2019 from https://phys.org/news/2013-01-isoprene-biofuel-ozone-deaths.html. © 2013 Phys.org
Journal information: Proceedings of the National Academy of Sciences

(Phys.org)—Researchers working out of the Hopkins Marine Station at Stanford University have found that the ability of some corals to withstand higher water temperatures appears to be gene based. In their paper published in the Proceedings of the National Academy of Sciences, the group outlines how they compared two types of corals commonly found in a reef in American Samoa and found that the more heat-resistant corals tend to express more gene types under normal temperature conditions. © 2013 Phys.org

More information: Genomic basis for coral resilience to climate change, PNAS, published online before print January 7, 2013, doi: 10.1073/pnas.1210224110

Abstract: Recent advances in DNA-sequencing technologies now allow for in-depth characterization of the genomic stress responses of many organisms beyond model taxa. They are especially appropriate for organisms such as reef-building corals, for which dramatic declines in abundance are expected to worsen as anthropogenic climate change intensifies. Different corals differ substantially in physiological resilience to environmental stress, but the molecular mechanisms behind enhanced coral resilience remain unclear. Here, we compare transcriptome-wide gene expression (via RNA-Seq using Illumina sequencing) among conspecific thermally sensitive and thermally resilient corals to identify the molecular pathways contributing to coral resilience. Under simulated bleaching stress, sensitive and resilient corals change expression of hundreds of genes, but the resilient corals had higher expression under control conditions across 60 of these genes. These “frontloaded” transcripts were less up-regulated in resilient corals during heat stress and included thermal tolerance genes such as heat shock proteins and antioxidant enzymes, as well as a broad array of genes involved in apoptosis regulation, tumor suppression, innate immune response, and cell adhesion.
We propose that constitutive frontloading enables an individual to maintain physiological resilience during frequently encountered environmental stress, an idea that has strong parallels in model systems such as yeast. Our study provides broad insight into the fundamental cellular processes responsible for enhanced stress tolerances that may enable some organisms to better persist into the future in an era of global climate change.

Press release: Stanford marine biologists search for the world’s strongest coral

Citation: Researchers use DNA sequencing to learn why some corals are more heat tolerant (2013, January 8), retrieved 18 August 2019 from https://phys.org/news/2013-01-dna-sequencing-corals-tolerant.html

Scientists the world over agree that global atmospheric temperatures are rising, and ocean temperatures are rising along with them. Because of this, those who study the seas and the organisms that live in them have been scrambling to find tools to discern which organisms will be affected by rising temperatures and which won’t; doing so might help direct conservation efforts. In this new research, the researchers looked to reefs off Ofu Island in American Samoa for help. Because of unique physical characteristics, pools of water there exist at different temperatures, and some experience extreme temperature swings on a daily basis. In studying the corals that live there, the team discovered that some did poorly as temperatures rose while others seemed to thrive, despite being very closely related.
Suspecting that an answer might lie in their genes, the researchers collected samples and took them back to their lab. The samples were put in water tanks and subjected to controlled water temperature fluctuations. Each was also tested for gene expression via RNA analysis using a technique known as Illumina sequencing. In the results, the researchers found that all of the corals expressed hundreds of genes when exposed to higher-than-normal temperatures. But they also found that the more heat-resistant corals expressed approximately 60 of those same genes during normal temperatures as well; the team refers to this as “frontloading.” They suggest that these corals give themselves the upper hand in dealing with environmental changes by expressing, during normal times, genes that might help in dealing with whatever changes come about.

The team suggests their findings may help with future conservation efforts as ocean temperatures continue to rise. By focusing resources on the corals most likely to survive, the hope is that more sea creatures can be saved in the long run.

Pillar coral, Dendrogyra cylindricus. Image: NOAA
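The “frontloading” signature the team describes, higher baseline expression plus a weaker heat-induced fold change in resilient corals, can be sketched as a simple filter over an expression matrix. This is a schematic illustration with made-up numbers and an arbitrary threshold, not the study’s actual RNA-Seq pipeline; the gene names are placeholders:

```python
import numpy as np

# Toy expression matrix: rows = genes, columns = (control, heat) for a
# resilient and a sensitive coral. Values are invented normalized read
# counts for illustration, not data from the study.
genes = ["hsp70", "catalase", "actin"]
resilient = {"control": np.array([50.0, 40.0, 10.0]),
             "heat":    np.array([80.0, 60.0, 11.0])}
sensitive = {"control": np.array([10.0, 8.0, 10.0]),
             "heat":    np.array([90.0, 70.0, 10.0])}

def frontloaded(res, sen, baseline_ratio=2.0):
    """A gene counts as 'frontloaded' if the resilient coral expresses it
    substantially more under control conditions AND up-regulates it less
    strongly under heat stress (smaller fold change)."""
    higher_baseline = res["control"] > baseline_ratio * sen["control"]
    weaker_induction = (res["heat"] / res["control"]) < (sen["heat"] / sen["control"])
    return higher_baseline & weaker_induction

mask = frontloaded(resilient, sensitive)
print([g for g, m in zip(genes, mask) if m])  # ['hsp70', 'catalase']
```

The housekeeping-style gene (`actin`) drops out because it is neither elevated at baseline nor differentially induced, which is the behavior one would expect of a non-stress gene under this filter.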
This shows a computer simulation of the absorption in five nanowires. The sunlight comes in from the top. The dark red areas, near the top, have the strongest absorption, while the dark blue areas have the weakest. The simulation was done in three dimensions, but the figure shows a cross-section. Credit: Wallentin et al.

Researchers the world over are on a quest to create a cheaper alternative to silicon-based solar cells. Some have focused on indium phosphide because it is more efficient at turning sunlight into electricity; unfortunately, it’s not very good at absorbing sunlight. In this new research, the team turned to nanowire technology to help it do a better job.

This shows the photovoltaic efficiency of millimeter-square InP single-band-gap nanowire solar cells as a function of time, measured within the FP7-financed AMON-RA project. About four million InP nanowires contribute to the signal. The line is a guideline for the eye. This figure gives an overview of the development of nanowire photovoltaics within AMON-RA; a comparison with the record efficiency development of other types of solar cells can be made using this plot by NREL: nrel.gov/ncpv/images/efficiency_chart.jpg. Credit: Wallentin et al.

This is a SEM image of indium phosphide (InP) nanowires after growth, shown at a 30-degree angle. The nanowires are about 1.5 microns long and 0.18 microns in diameter, with a center-to-center distance of 0.47 microns (1 micron (µm) is one thousandth of a millimeter, that is, one millionth of a meter). This can be compared with sunlight, which has most of its energy in a wavelength range from 0.5 to a few microns. The nanowires cover 12% of the surface as seen from the top, that is, from the sun’s point of view. On top of each nanowire is the gold particle used as a seed for crystal growth. Credit: Wallentin et al.

In addition to being nearly as efficient as traditional silicon-based solar cells, this new type of cell can be bent into flexible panels, allowing more mounting options, and it requires a smaller overall surface area. The team suggests that solar cells made using this approach might be best used in concentrated systems using lenses, though it’s not yet clear whether they would stand up to the intense heat. There’s also the problem of manufacturing the cells at a scale large enough for them to be sold commercially at a reasonable cost.

More information: 1. InP Nanowire Array Solar Cells Achieving 13.8% Efficiency by Exceeding the Ray Optics Limit, Science, DOI: 10.1126/science.1230969

Abstract: Photovoltaics based on nanowire arrays could reduce cost and materials consumption compared to planar devices, but have exhibited low efficiency of light absorption and carrier collection. We fabricated a variety of millimeter-sized arrays of p-i-n doped InP nanowires and found that the nanowire diameter and the length of the top n-segment were critical for cell performance. Efficiencies up to 13.8% (comparable to the record planar InP cell) were achieved using resonant light trapping in 180-nanometer-diameter nanowires that only covered 12% of the surface. The share of sunlight converted into photocurrent (71%) was six times the limit in a simple ray optics description. Furthermore, the highest open circuit voltage of 0.906 volt exceeds that of its planar counterpart, despite the roughly 30 times higher surface-to-volume ratio of the nanowire cell.

2. Performance of Nanowire Solar Cells on the Rise, DOI: 10.1126/science.339.6117.263.
www.sciencemag.org/content/339/6117/263.summary

Press release

This shows an optical microscope image of four nanowire solar cells. Each cell is a slightly lighter shade of purple, while the darker areas in between are inactive. The yellow areas are gold metal pads, used for connecting the solar cells to an external load. Each cell contains about 4.5 million nanowires. Credit: Wallentin et al.

© 2013 Phys.org

The idea is to create a small forest of wires standing on end atop a platform, each wire just 1.5 micrometers tall and 180 nanometers in diameter. The bottom part of each wire is doped to give it an excess positive charge and the top is doped to give it an excess negative charge, with the middle remaining neutral, all standing on a bed of silicon dioxide. The team produced this structure by depositing gold seed particles on the substrate and supplying indium and phosphorus to grow the wires, which were kept clean and straight by etching with hydrochloric acid. The result is a photovoltaic cell capable of converting 13.8 percent of incoming sunlight into electricity while absorbing 71 percent of the light above the band gap.

This scanning electron microscopy (SEM) image shows a side view of nanowires that have been coated with a transparent, conductive oxide. The sunlight comes in from the top, which is why the top contact must be transparent. The substrate is used for the bottom contact. Credit: Wallentin et al.

(Phys.org)—Robert F. Service has published a News & Analysis piece in the journal Science describing the progress being made in nanowire photovoltaics.
One of those innovations is described in another paper published in the same journal by a team working on indium phosphide nanowire technology. In their paper, they describe how, by creating micrometer-sized wires, they have managed to build a non-silicon-based solar cell capable of converting almost 14 percent of incoming sunlight into electric current.

Citation: Researchers devise method to create efficient indium phosphate nanowire photovoltaics (2013, January 18), retrieved 18 August 2019 from https://phys.org/news/2013-01-method-efficient-indium-phosphate-nanowire.html

Journal information: Science
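The geometry quoted in the captions and abstract is self-consistent, as a quick check shows: 180-nm wires on a 470-nm center-to-center pitch (a square array is assumed here, which is my assumption, not stated in the text) cover about 12% of the surface, so converting 71% of the usable light into photocurrent is roughly six times what a simple ray-optics picture would allow:

```python
import math

d = 180e-9      # nanowire diameter (m), from the article
pitch = 470e-9  # center-to-center spacing (m); square array assumed

# Fraction of the surface covered by wire cross-sections, seen from the sun:
coverage = math.pi * (d / 2) ** 2 / pitch ** 2
print(f"areal coverage: {coverage:.1%}")  # ~11.5%, close to the 12% quoted

# Share of above-band-gap sunlight converted to photocurrent (from the abstract):
absorbed = 0.71
print(f"enhancement over ray optics: {absorbed / coverage:.1f}x")  # ~6x
```

The factor-of-six enhancement is the “resonant light trapping” claim in the abstract: each wire collects light from an area much larger than its own cross-section.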
© 2013 Phys.org

More information: training.linuxfoundation.org/w … -for-linux-migration

Citation: International Space Station making laptop migration from Windows XP to Debian 6 (2013, May 12), retrieved 18 August 2019 from https://phys.org/news/2013-05-international-space-station-laptop-migration.html

Although Linux machines, like Windows machines, are not malware-proof, the fact that Linux is an open source operating system means that the community overseeing a distribution can issue quick notices and quick patches. Debian’s site claims that mail sent to its mailing lists gets answers in 15 minutes or less, from the people who developed the software. The project also notes that its bug tracking system is open and encourages users to submit bug reports; users are notified when a bug is closed. “We don’t try to hide the fact that software doesn’t always work the way users want,” according to the Debian site.

An incident in 2008 apparently made space-station personnel more aware than ever of a computer virus’s ability to disrupt operations in the absence of support from an open source community. That was the year station computers were infected by the Gammima.AG virus after an astronaut brought an infected USB flash drive into orbit; the virus then spread to other computers on board.

Chuvala and NASA selected Debian, a system that can use the Linux or FreeBSD kernel and runs on almost all personal computers. Ubuntu, a popular Linux-based operating system, says on its site that “Debian is the rock upon which Ubuntu is built.” Debian was begun in August 1993 by Ian Murdock as a new distribution to be made openly, in the spirit of Linux and GNU. The ISS adopted Debian 6. The Linux Foundation stepped in to assist with tailored training in the form of two courses, Introduction to Linux for Developers and Developing Applications for Linux, which prepared the team to develop apps specific to the needs of the ISS.
(Phys.org) —The International Space Station has decided to switch dozens of laptops running Windows XP over to Debian 6. What Linux fans have been saying for years, that Linux delivers greater stability and reliability for public and private computing environments, resonated with Keith Chuvala, the United Space Alliance contractor manager involved in the switch. Chuvala said, “We needed an operating system that was stable and reliable – one that would give us in-house control. So if we needed to patch, adjust or adapt, we could.”
Examples of taking small actions to prevent a large catastrophe are numerous; workers at ski resorts, for instance, commonly use dynamite to trigger small avalanches in the hope of avoiding larger ones. The problem with such efforts is that there is little scientific or mathematical basis for them. How do ski operators know that they are reducing risk, and worse, how do they know they’re not making things worse? Sadly, the current model is to rely on past experience, hunches, and sometimes prayer. In this new effort, the research team sought to create a computer model that could help control a complex system in a reproducible way, based on the theory of self-organized criticality (SOC).

The team began by reproducing the efforts of prior researchers, in which simulated grains of sand were used to build simulated mounds, along with the simulated avalanches that occur when critical points are reached. To better simulate the real world, the researchers created several mounds, all close enough to one another to be affected should a neighbor crumble. In doing so, they studied the most important sand grain of all: the one that causes a mound to topple. Prior research had shown that the mounds organize themselves into critical states whose avalanche size distribution follows a power law, which is what SOC describes. In the new model, the researchers assigned a variable to the probability of a given mound cascading if one more grain were added to it, then sought to control that variable. They found that by increasing or decreasing its value, cascades could be both initiated and avoided.
By running the model with different values, the team found they were better able to study the dynamics of the entire system. The most interesting result was that sometimes, by trying too hard to suppress large avalanches through causing smaller ones, they inadvertently increased the chances of a large one happening anyway.

While the researchers’ model is interesting, there is no clear evidence that real-world events unfold as cleanly as a computer simulation. For that reason, much more work will have to be done before it’s known whether the new model can help control real-world systems.

Citation: Researchers develop model to help control cascading events (2013, August 19), retrieved 18 August 2019 from https://phys.org/news/2013-08-cascading-events.html

Journal information: Physical Review Letters © 2013 Phys.org

(Phys.org) —A team of researchers at the University of California has developed a model that might lead to a better way to control natural cascading events such as landslides, earthquakes, or even cascades in neural networks. In their paper published in the journal Physical Review Letters, the team describes how they expanded on prior research using simulated sand piles to develop statistical models for controlling complex cascading events.

Schematic representation of the sandpile model used by Noël et al. When one grain falls on top of a pile with three grains (a), the pile becomes unstable and topples. While toppling, all grains in the pile are evenly distributed among the four neighbors (b). The toppling cascades further as one neighboring pile becomes unstable (four grains) and also topples (c). Credit: APS/Alan Stonebraker

More information: Controlling Self-Organizing Dynamics on Networks Using Models that Self-Organize, Phys. Rev. Lett.
111, 078701 (2013)AbstractControlling self-organizing systems is challenging because the system responds to the controller. Here, we develop a model that captures the essential self-organizing mechanisms of Bak-Tang-Wiesenfeld (BTW) sandpiles on networks, a self-organized critical (SOC) system. This model enables studying a simple control scheme that determines the frequency of cascades and that shapes systemic risk. We show that optimal strategies exist for generic cost functions and that controlling a subcritical system may drive it to criticality. This approach could enable controlling other self-organizing systems. This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
(Phys.org) — A team of researchers from China and the U.S. has devised a relatively simple means for measuring the shear forces that exist between sheets of graphene and other materials. In their paper published in the journal Physical Review Letters, the group describes the technique and the results they found when using it to measure shear forces for several types of 2-D materials. More information: Guorui Wang et al. Measuring Interlayer Shear Stress in Bilayer Graphene, Physical Review Letters (2017). DOI: 10.1103/PhysRevLett.119.036101. Abstract: Monolayer two-dimensional (2D) crystals exhibit a host of intriguing properties, but the most exciting applications may come from stacking them into multilayer structures. Interlayer and interfacial shear interactions could play a crucial role in the performance and reliability of these applications, but little is known about the key parameters controlling shear deformation across the layers and interfaces between 2D materials. Herein, we report the first measurement of the interlayer shear stress of bilayer graphene based on pressurized microscale bubble loading devices. We demonstrate continuous growth of an interlayer shear zone outside the bubble edge and extract an interlayer shear stress of 40 kPa based on a membrane analysis for bilayer graphene bubbles. Meanwhile, a much higher interfacial shear stress of 1.64 MPa was determined for monolayer graphene on a silicon oxide substrate. Our results not only provide insights into the interfacial shear responses of the thinnest structures possible, but also establish an experimental method for characterizing the fundamental interlayer shear properties of the emerging 2D materials for potential applications in multilayer systems.
As graphene sheets are used in more applications, it has become important to better understand the shear forces at play between them and other materials. Learning more could, for example, prevent sheets from coming apart when they are used to make an electronic device. In some applications, graphene sheets are layered with other graphene sheets, while in others they are applied to sheets of other materials. But until now, there was no way to measure how well such materials clung to one another. Traditionally, such measurements are made by adhering two sheets of material and then pulling them in opposite directions, a direct means of measuring shear force. But because graphene sheets are a single atom thick, that approach is not feasible. In this new effort, the researchers found a new way to measure shear forces between such materials. In the new method, tiny holes were drilled in one sheet before it was adhered to another sheet. Air was then used to generate pressure from below, causing the top sheet to rise and pulling the sheet below with it, forming a bubble. The researchers then used Raman spectroscopy to measure the amount of stretching at the base of the bubble as a means of measuring the shear forces between the two materials. Using their technique, the researchers measured the shear stress between two sheets of graphene and found it to be 40 kPa, which is very small and unsurprising, as sheets of graphene have been used as a lubricant. The team also measured the shear stress between a sheet of graphene and a sheet of silicon dioxide, and found it to be 1.64 MPa, approximately 40 times greater than that between graphene sheets.
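The "approximately 40 times" comparison follows directly from the two reported stresses; the snippet below just makes the unit conversion and the ratio explicit.

```python
# Shear stresses reported in the study, converted to pascals
interlayer_graphene = 40e3    # 40 kPa, graphene on graphene
graphene_on_oxide = 1.64e6    # 1.64 MPa, graphene on silicon oxide

ratio = graphene_on_oxide / interlayer_graphene
print(f"interfacial / interlayer ≈ {ratio:.0f}")  # → 41
```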
Journal information: Physical Review Letters © 2017 Phys.org Citation: Bubble technique used to measure shear forces between graphene sheets (2017, July 19) retrieved 18 August 2019 from https://phys.org/news/2017-07-technique-sheer-graphene-sheets.html This cutaway shows the bulging that occurs when air is pumped through a hole underneath a graphene sheet. Researchers estimated the shear resistance by measuring the stretching in the sheet around the hole. The stretching for bilayer graphene (blue) is much larger than that for monolayer graphene (yellow). Credit: L. Liu, C. Weng, & G. Wang/NCNST
Citation: Astronomers discover a 'hot Jupiter' orbiting a rapidly rotating star (2017, December 20) retrieved 18 August 2019 from https://phys.org/news/2017-12-astronomers-hot-jupiter-orbiting-rapidly.html KELT-21b was discovered by a group of researchers led by Marshall C. Johnson of the Ohio State University. The astronomers used the KELT-North telescope at Winer Observatory in Arizona to observe the star KELT-21 (also known as HD 332124) as part of the Kilodegree Extremely Little Telescope (KELT) project, a survey that detects transiting exoplanets around bright stars. Johnson's team identified a transit signal in the light curve of KELT-21. The planetary nature of this signal was later confirmed by a follow-up observational campaign that employed the KELT Follow-Up Network (KFUN), consisting of various ground-based observatories worldwide. "Here we present the discovery of a new transiting hot Jupiter, KELT-21b, confirmed using Doppler tomography," the paper reads. The study reveals that KELT-21b has a radius of about 1.59 Jupiter radii, but its exact mass remains uncertain; the researchers could only determine an upper mass limit of approximately 3.91 Jupiter masses. KELT-21b orbits its star every 3.61 days at a distance of about 0.05 AU from the host and has an equilibrium temperature of 2,051 K. These parameters make KELT-21b another example of a "hot Jupiter" exoplanet. The so-called hot Jupiters are gas giant planets, similar in characteristics to the solar system's biggest planet, with orbital periods of less than 10 days. They have high surface temperatures, as they orbit their host stars very closely. The host itself is a metal-poor star of spectral type A8V, with a radius of about 1.63 solar radii and a mass nearly 46 percent greater than the sun's.
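As a sanity check, the quoted ~0.05 AU separation is just what Kepler's third law gives for a 3.61-day period around a star 1.46 times the sun's mass. The constants below are standard values, and the check is ours rather than the paper's.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

M_star = 1.46 * M_SUN  # KELT-21: ~46 percent more massive than the sun
P = 3.61 * 86400       # orbital period, seconds

# Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2)
a = (G * M_star * P**2 / (4 * math.pi ** 2)) ** (1 / 3)
print(f"semi-major axis ≈ {a / AU:.3f} AU")  # close to the ~0.05 AU quoted
```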
The star is about 1.6 billion years old and has an effective temperature of approximately 7,600 K. Notably, KELT-21 has a very high projected rotation velocity of 146 km/s, which makes it the most rapidly rotating star known to host a transiting giant planet. KELT-21 is also one of only a few A stars known to be orbited by a transiting planet. Besides discovering and characterizing KELT-21b, Johnson's team also found two stars of about 0.12 solar masses each that could be associated with KELT-21. However, they cannot confirm this finding with the available data, so further observations are required to validate the assumption. "Our high-resolution imaging observations revealed the presence of a close pair of faint stars at a separation of 1.2 arcseconds from the planet host star. Although we cannot confirm using our current data whether they are physically associated with the KELT-21 system, we have argued statistically that they are unlikely to be background sources," the researchers wrote in the paper. If this hypothesis is confirmed, it would mean that KELT-21b is one of only a handful of known transiting exoplanets in hierarchical triple stellar systems. More information: KELT-21b: A Hot Jupiter Transiting the Rapidly-Rotating Metal-Poor Late-A Primary of a Likely Hierarchical Triple System, arXiv:1712.03241 [astro-ph.EP] arxiv.org/abs/1712.03241 © 2017 Phys.org All follow-up transits of KELT-21b combined into one light curve (grey) and a 5 minute binned light curve (black). The red line is the combined and binned models for each transit. Credit: Johnson et al., 2017. An international team of astronomers has found a "hot Jupiter" exoplanet circling a rapidly rotating, metal-poor star. The newly discovered alien world, designated KELT-21b, is larger than Jupiter and orbits its host in less than four days. The finding is presented in a paper published December 8 on arXiv.org.
Citation: Using physics to make better GDP estimates (2018, July 31) retrieved 18 August 2019 from https://phys.org/news/2018-07-physics-gdp.html Credit: CC0 Public Domain Journal information: Nature Physics More information: A. Tacchella et al. A dynamical systems approach to gross domestic product forecasting, Nature Physics (2018). DOI: 10.1038/s41567-018-0204-y. Abstract: Models developed for gross domestic product (GDP) growth forecasting tend to be extremely complex, relying on a large number of variables and parameters. Such complexity is not always to the benefit of the accuracy of the forecast. Economic complexity constitutes a framework that builds on methods developed for the study of complex systems to construct approaches that are less demanding than standard macroeconomic ones in terms of data requirements, but whose accuracy remains to be systematically benchmarked. Here we develop a forecasting scheme that is shown to outperform the accuracy of the five-year forecast issued by the International Monetary Fund (IMF) by more than 25% on the available data. The model is based on effectively representing economic growth as a two-dimensional dynamical system, defined by GDP per capita and 'fitness', a variable computed using only publicly available product-level export data. We show that forecasting errors produced by the method are generally predictable and are also uncorrelated to IMF errors, suggesting that our method is extracting information that is complementary to standard approaches. We believe that our findings are of a very general nature and we plan to extend our validations on larger datasets in future works.
Currently, economists use a variety of models to produce GDP estimates, which are often used by lawmakers and policymakers to inform decisions. Such models typically require a host of variable inputs and are quite complex. In sharp contrast, the estimates made by the Italian team used just two variables: current GDP and one they describe as "economic fitness." The researchers calculated a number for a given country's economic fitness using physics principles applied to export products. Factors such as the diversification and complexity of the products were taken into account, offering a means of gauging the relative strength of an economy. The idea was to rate a country's economic strength, on the reasoning that the wider the range of products being exported and the more complex they were, the more likely GDP was to grow, and to use that rating to forecast future prosperity. The team reports that they have been running their models for approximately six years, long enough to see how well their estimates matched actual GDP numbers over time. Their estimates were on average 25 percent more accurate than those made by the International Monetary Fund, and their models correctly predicted the booming Chinese economy in 2015 when more traditional models suggested the country was headed for a slowdown. The researchers explain that the field of economic complexity involves studying the behavior of economies over time and the factors that cause them to change. Doing so includes using tools such as those developed to measure turbulence in fluids and traffic jams. The philosophy of such research, they explain, revolves around the idea that complex systems with a large number of elements interacting in non-linear ways tend to have emergent properties. Learning to understand such properties, they further note, can offer insights into relationships such as the one between exports and GDP trends.
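As a rough illustration of the two-variable idea, a country can be treated as a point in the (fitness, GDP per capita) plane and its growth forecast from the observed growth of historical "analogues" that sat nearby in that plane. This is a toy nearest-neighbour sketch, not the authors' actual model, and every number in it is fabricated for the demonstration.

```python
import math

# (fitness, log GDP per capita) observed at time t, and the log-GDP growth
# over the following five years -- fabricated historical "trajectories"
history = [
    (0.5, 7.0, 0.10),
    (0.6, 7.2, 0.12),
    (2.0, 9.5, 0.30),
    (2.2, 9.8, 0.28),
    (1.0, 8.0, 0.15),
]

def forecast(fitness, log_gdp, k=3):
    """Average the five-year growth of the k nearest historical analogues
    in the (fitness, log GDP) plane."""
    nearest = sorted(history,
                     key=lambda h: math.hypot(h[0] - fitness,
                                              h[1] - log_gdp))[:k]
    return sum(h[2] for h in nearest) / k

# A hypothetical country with high fitness and high GDP per capita
print(round(forecast(2.1, 9.6), 3))  # → 0.243
```

The design choice worth noting is that nothing here requires the hundreds of inputs of a standard macroeconomic model: the position of a country in the two-dimensional plane, plus past trajectories, carries all the information used.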
A team of Italian physicists has used economic complexity theory to produce five-year gross domestic product (GDP) estimates for several countries around the world. In their paper published in the journal Nature Physics, Andrea Tacchella, D. Mazzilli and Luciano Pietronero describe how they applied the theory to economic forecasting and how well it has worked thus far. © 2018 Phys.org