
Adventure Land

You will find the latest information about us on this page. Our site is constantly evolving and growing. We provide a wide range of science stories, and our mission is to make them useful to everyone.



Stay up-to-date on all of humanity’s attempts to understand and experience the cosmos. What does the future of space travel look like? Is it ethical to colonize Mars? Why isn’t Pluto a planet? What are the stars up to? What made the moon, and what is it made of? Are we ever going to find life on other worlds? And what exactly is life, anyway? Meet the people—and robots—working to answer these colossal cosmic questions.


If it feels like the planet is under attack from all fronts, well, that's understandable. Our weather is turning more and more wild, our oceans are polluted with debris both massive and microscopic, and ecosystems everywhere are morphing into something new. But knowledge is the best defense. Learn what threatens the future of the planet—and how you can do your part to protect it.

Humans are hardly the only interesting members of the animal kingdom. Research on the bodies and behaviors of our furry (and creepy and crawly and slimy and slithery) cousins can help scientists learn more about our own species’ evolution and cognition. And even when they don’t help unlock the ancient secrets of human ancestry, some animals are just too cute—or weird, or gross, or terrifying—not to get to know a little better. Go ahead: take a walk on the wild side.


I just started this website on 26 February 2018.

More interesting posts are coming soon.



The water cycle (known scientifically as the hydrologic cycle) refers to the continuous exchange of water within the hydrosphere, between the atmosphere, soil water, surface water, groundwater, and plants.
Water moves perpetually through each of these regions in a cycle consisting of the following transfer processes:
evaporation from oceans and other water bodies into the air, and transpiration from land plants and animals into the air;
precipitation, from water vapor condensing from the air and falling to earth or ocean;
runoff from the land, usually reaching the sea.
Most water vapor over the oceans returns to the oceans, but winds carry water vapor over land at the same rate as runoff into the sea, about 47 Tt per year. Over land, evaporation and transpiration contribute another 72 Tt per year. Precipitation, at a rate of 119 Tt per year over land, has several forms: most commonly rain, snow, and hail, with some contribution from fog and dew.[31] Dew is small drops of water that are condensed when a high density of water vapor meets a cool surface. Dew usually forms in the morning when the temperature is the lowest, just before sunrise and when the temperature of the earth's surface starts to increase.[32] Condensed water in the air may also refract sunlight to produce rainbows.
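Taken together, those flux figures form a closed budget: precipitation over land equals the vapor imported from the sea plus land evapotranspiration, and in steady state the land returns the imported amount as runoff. A quick arithmetic check of the numbers quoted above:

```python
# Annual water fluxes quoted in the text, in teratonnes per year (Tt/yr).
vapor_sea_to_land = 47        # net water vapor carried by winds from sea to land
land_evapotranspiration = 72  # evaporation + transpiration over land
runoff_to_sea = 47            # runoff from land back into the sea

# Precipitation over land is fed by both vapor sources.
precipitation_over_land = vapor_sea_to_land + land_evapotranspiration
print(precipitation_over_land)  # 119, matching the 119 Tt/yr figure in the text

# Steady state: what the land gains as imported vapor, it returns as runoff.
assert runoff_to_sea == precipitation_over_land - land_evapotranspiration
```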
Water runoff often collects over watersheds flowing into rivers. A mathematical model used to simulate river or stream flow and calculate water quality parameters is a hydrological transport model. Some water is diverted to irrigation for agriculture. Rivers and seas offer opportunity for travel and commerce. Through erosion, runoff shapes the environment creating river valleys and deltas which provide rich soil and level ground for the establishment of population centers. A flood occurs when an area of land, usually low-lying, is covered with water. It is when a river overflows its banks or flood comes from the sea. A drought is an extended period of months or years when a region notes a deficiency in its water supply. This occurs when a region receives consistently below average precipitation.
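Hydrological transport models vary widely in sophistication; purely as an illustration of the idea, here is a minimal "linear reservoir" sketch in which discharge at each step is a fixed fraction of stored water (the coefficient `k` and the rainfall series are invented for the example, not taken from any real model):

```python
def linear_reservoir(rainfall, k=0.3, storage=0.0):
    """Toy runoff model: each step, a fraction k of stored water discharges.

    rainfall: sequence of rainfall depths per time step (arbitrary units)
    k: storage-to-discharge coefficient per step (hypothetical value)
    Returns the discharge series.
    """
    discharge = []
    for rain in rainfall:
        storage += rain   # rainfall adds to catchment storage
        out = k * storage # discharge proportional to current storage
        storage -= out
        discharge.append(out)
    return discharge

# A short storm followed by dry steps: discharge rises, peaks, then recedes.
flows = linear_reservoir([10, 20, 0, 0, 0])
print([round(f, 2) for f in flows])
```

This captures the qualitative behavior the text describes (runoff collecting and draining through a watershed) while real transport models add routing, infiltration, and water-quality terms on top.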


Water is a transparent and nearly colorless chemical substance that is the main constituent of Earth's streams, lakes, and oceans, and the fluids of most living organisms. Its chemical formula is H2O, meaning that each of its molecules contains one oxygen and two hydrogen atoms that are connected by covalent bonds. Strictly speaking, water refers to the liquid state of a substance that prevails at standard ambient temperature and pressure; but it often refers also to its solid state (ice) or its gaseous state (steam or water vapor). It also occurs in nature as snow, glaciers, ice packs and icebergs, clouds, fog, dew, aquifers, and atmospheric humidity.
Water covers 71% of the Earth's surface.[1] It is vital for all known forms of life. On Earth, 96.5% of the planet's water is found in seas and oceans, 1.7% in groundwater, 1.7% in glaciers and the ice caps of Antarctica and Greenland, a small fraction in other large water bodies, and 0.001% in the air as vapor, clouds (formed of ice and liquid water suspended in air), and precipitation.[2][3] Only 2.5% of this water is freshwater, and 98.8% of that water is in ice (excepting ice in clouds) and groundwater. Less than 0.3% of all freshwater is in rivers, lakes, and the atmosphere, and an even smaller amount of the Earth's freshwater (0.003%) is contained within biological bodies and manufactured products.[2] A greater quantity of water is found in the earth's interior.[4]
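These percentages compound quickly. Multiplying the figures quoted above shows just how small the readily accessible slice is:

```python
freshwater_share = 2.5 / 100         # fraction of all water that is fresh
in_ice_and_groundwater = 98.8 / 100  # fraction of freshwater locked in ice/groundwater
in_rivers_lakes_air = 0.3 / 100      # upper bound: fraction of freshwater in
                                     # rivers, lakes and the atmosphere

# Readily available surface and atmospheric freshwater, as a share of ALL water:
accessible = freshwater_share * in_rivers_lakes_air
print(f"{accessible:.4%}")  # under a hundredth of one percent of Earth's water
```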
Water on Earth moves continually through the water cycle of evaporation and transpiration (evapotranspiration), condensation, precipitation, and runoff, usually reaching the sea. Evaporation and transpiration contribute to the precipitation over land. Large amounts of water are also chemically combined or adsorbed in hydrated minerals.
Safe drinking water is essential to humans and other lifeforms even though it provides no calories or organic nutrients. Access to safe drinking water has improved over the last decades in almost every part of the world, but approximately one billion people still lack access to safe water and over 2.5 billion lack access to adequate sanitation.[5] However, some observers have estimated that by 2025 more than half of the world population will be facing water-based vulnerability.[6] A report, issued in November 2009, suggests that by 2030, in some developing regions of the world, water demand will exceed supply by 50%.[7]
Water plays an important role in the world economy. Approximately 70% of the freshwater used by humans goes to agriculture.[8] Fishing in salt and fresh water bodies is a major source of food for many parts of the world. Much of long-distance trade of commodities (such as oil and natural gas) and manufactured products is transported by boats through seas, rivers, lakes, and canals. Large quantities of water, ice, and steam are used for cooling and heating, in industry and homes. Water is an excellent solvent for a wide variety of chemical substances; as such it is widely used in industrial processes, and in cooking and washing. Water is also central to many sports and other forms of entertainment, such as swimming, pleasure boating, boat racing, surfing, sport fishing, and diving.






An ocean (from Ancient Greek Ὠκεανός, transc. Okeanós, the sea of classical antiquity[1]) is a body of saline water that composes much of a planet's hydrosphere.[2] On Earth, an ocean is one of the major conventional divisions of the World Ocean. These are, in descending order by area, the Pacific, Atlantic, Indian, Southern (Antarctic), and Arctic Oceans.[3][4] The word sea is often used interchangeably with "ocean" in American English but, strictly speaking, a sea is a body of saline water (generally a division of the world ocean) partly or fully enclosed by land.[5]
Saline water covers approximately 360,000,000 km2 (140,000,000 sq mi) and is customarily divided into several principal oceans and smaller seas, with the ocean covering approximately 71% of Earth's surface and 90% of the Earth's biosphere.[6] The ocean contains 97% of Earth's water, and oceanographers have stated that less than 5% of the World Ocean has been explored.[6] The total volume is approximately 1.35 billion cubic kilometers (320 million cu mi) with an average depth of nearly 3,700 meters (12,100 ft).[7][8][9]
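The three figures quoted above are mutually consistent, since mean depth is just volume divided by area:

```python
area_km2 = 360_000_000  # ocean surface area quoted in the text
volume_km3 = 1.35e9     # total ocean volume quoted in the text

mean_depth_m = volume_km3 / area_km2 * 1000  # convert km to m
print(round(mean_depth_m))  # 3750, consistent with "nearly 3,700 meters"
```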
As the world ocean is the principal component of Earth's hydrosphere, it is integral to life, forms part of the carbon cycle, and influences climate and weather patterns. The world ocean is the habitat of 230,000 known species, but because much of it is unexplored, the number of species that exist in the ocean is much larger, possibly over two million.[10] The origin of Earth's oceans is unknown; oceans are thought to have formed in the Hadean eon and may have been the impetus for the emergence of life.
Extraterrestrial oceans may be composed of water or other elements and compounds. The only confirmed large stable bodies of extraterrestrial surface liquids are the lakes of Titan, although there is evidence for the existence of oceans elsewhere in the Solar System. Early in their geologic histories, Mars and Venus are theorized to have had large water oceans. The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, and a runaway greenhouse effect may have boiled away the global ocean of Venus. Compounds such as salts and ammonia dissolved in water lower its freezing point so that water might exist in large quantities in extraterrestrial environments as brine or convecting ice. Unconfirmed oceans are speculated beneath the surface of many dwarf planets and natural satellites; notably, the ocean of Europa is estimated to have over twice the water volume of Earth. The Solar System's giant planets are also thought to have liquid atmospheric layers of yet to be confirmed compositions. Oceans may also exist on exoplanets and exomoons, including surface oceans of liquid water within a circumstellar habitable zone. Ocean planets are a hypothetical type of planet with a surface completely covered with liquid.




When British photographer Lara Maiklem heard that tens of thousands of sea creatures had washed up on a beach near her hometown of Kent, England, over the weekend, she had to see the scene for herself. So, she woke her 5-year-old twins in time to catch the tide.

Maiklem described the scene as "shocking" and "sad," but at the same time, she had to admit it was an "incredible" sight. In fact, it was "almost biblical in scale," she added.

"There were thousands upon thousands of starfish, with crabs, sea urchins, fish and sea anemones mixed in with them," Maiklem told Fox News. "Someone even found a lobster."

The creatures covered the sandy beach like a thick blanket. Maiklem and her two kids tried to rescue as many fish as they could, tossing them one by one back into the sea.

The animals were the victims of a cold spell – what Maiklem called a "beast from the east" – that hit the U.K. last week. Similar scenes were reported down the coast, Yorkshire Wildlife Trust, a wildlife conservation charity, said in a news release on Wednesday.

“There was a three degree drop in sea temperature last week which will have caused animals to hunker down and reduce their activity levels," Bex Lynam, North Sea marine advocacy officer for Yorkshire Wildlife Trust, said in a statement provided to Fox News. "This makes them vulnerable to rough seas – they became dislodged by large waves and washed ashore when the rough weather kicked in."

Crabs, starfish and mussels were "ankle-deep" in some places, though at least two lucky marine species seemed to survive the freeze: lobsters and crabs.

“Lobsters and crabs can survive out of water, unlike the majority of the other creatures washed up," Lynam told Fox News. "Also they have a hard exoskeleton, which offers them a certain level of protection when being thrown around by the sea.”

Maiklem said she also found several dead sea birds washed up along the same stretch. 

"I understand it is a natural phenomenon," Maiklem said. "I'm pleased I went to see it, but I wouldn't like to see it again."

Wildlife officials also hope they won't see a repeat of the disaster.

Yorkshire Wildlife Trust is working with local fishermen to clear the beach and rescue any remaining creatures that are still alive.

"This area is very important for shellfish and we work alongside fishermen to promote sustainable fisheries and protect reproductive stocks," Lynam said. "It’s worth saving them so that they can be put back into the sea and continue to breed."

Dr. Lissa Batey, senior living seas officer with The Wildlife Trusts, an organization made up of 47 local wildlife trusts in the U.K., said the government can help the creatures by designating more marine conservation zones.

“We can’t prevent natural disasters like this – but we can mitigate against declining marine life and the problems that humans cause by creating enough protected areas at sea and by ensuring that these sites are large enough and close enough to offer fish, crustaceans, dolphins and other marine life the protection they require to withstand natural events such as this," Batey said in a statement.


A deep sea diver has struck gold after unearthing a 17th century chain worth $250,000 from the ocean floor.

Bill Burt, a diver for Mel Fisher's Treasures, spotted the 40-inch gold chain while looking for the wrecked Nuestra Senora de Atocha, which sank off the Florida Keys in a 1622 hurricane.

Shipwreck experts have tentatively valued the piece at around $250,000.

The chain has 55 links, an enamelled gold cross and a two-sided engraved religious medallion featuring the Virgin Mary and a chalice.

On the edges of the cross there is engraved wording thought to be in Latin.

Andy Matroci, captain of Mel Fisher's Treasures salvage vessel, JB Magruder, said the crew had been diving at the North end of the Atocha trail.

On their last trip to the wreck they uncovered 22 silver coins and a cannon ball just east of the site.

They had been hoping to find more coins in the area, Mr Matroci said, but instead found the chain.

'In the nine years I have been running this boat this is the most unique artefact we have brought up,' Mr Matroci said.

The piece is believed to be from the Atocha's infamous treasure trove. 

The company has uncovered half a billion dollars in historic artefacts, gold, silver and emeralds since they began diving the wreck in 1969.

In 1985 - after 15 years of searching - the Fisher crew discovered Atocha's 'mother lode', worth more than $450million.

They unearthed thousands of artefacts, silver coins, gold coins - many in near mint condition, exquisite jewellery sets with precious stones, gold chains, disks, a variety of armaments and even seeds, which later sprouted.

They then faced a legal wrangle with the U.S. Government, which claimed title to the wreck. Florida state officials seized many of the items the Fisher crew had retrieved.

But after eight years of litigation, the U.S. Supreme Court ruled in Fisher's favour.

The contents of the ship's sterncastle - a wooden, fort-shaped area at the back of the ship - have never been recovered.

This is where the wealthy passengers, including nobility and clergy, would have stayed.

Fisher's estimates the treasure in the sterncastle section is worth in the region of half a billion dollars.

The latest find was likely owned by a member of the clergy, indicating the company's search for the missing treasure trove could be getting nearer.








Types of lake according to seasonal variation of lake level and volume
Lakes are informally classified and named according to the seasonal variation in their lake level and volume. Some of the names include:
An ephemeral lake is a short-lived lake or pond.[33] If it fills with water and dries up (disappears) seasonally, it is known as an intermittent lake.[34] Ephemeral lakes often fill poljes.[35]
A dry lake is a popular name for an ephemeral lake that contains water only intermittently, at irregular and infrequent intervals.[25][36]
A perennial lake is a lake that has water in its basin throughout the year and is not subject to extreme fluctuations in level.[25][33]
A playa lake is a typically shallow, intermittent lake that covers or occupies a playa in wet seasons or in especially wet years but subsequently dries up in an arid or semiarid region.[25][36]
Vlei is a name used in South Africa for a shallow lake that varies considerably in level with the seasons.



The word lake comes from Middle English lake ("lake, pond, waterway"), from Old English lacu ("pond, pool, stream"), from Proto-Germanic *lakō ("pond, ditch, slow moving stream"), from the Proto-Indo-European root *leǵ- ("to leak, drain"). Cognates include Dutch laak ("lake, pond, ditch"), Middle Low German lāke ("water pooled in a riverbed, puddle") as in: de:Moorlake, de:Wolfslake, de:Butterlake, German Lache ("pool, puddle"), and Icelandic lækur ("slow flowing stream"). Also related are the English words leak and leach.
There is considerable uncertainty about defining the difference between lakes and ponds, and no current internationally accepted definition of either term across scientific disciplines or political boundaries exists.[4] For example, limnologists have defined lakes as water bodies which are simply a larger version of a pond, which can have wave action on the shoreline, or where wind-induced turbulence plays a major role in mixing the water column. None of these definitions completely excludes ponds, and all are difficult to measure. For this reason, simple size-based definitions are increasingly used to separate ponds and lakes. One definition of lake is a body of water of 2 hectares (5 acres) or more in area;[5]:331[6] however, others have defined lakes as waterbodies of 5 hectares (12 acres) and above, or 8 hectares (20 acres) and above[7] (see also the definition of "pond"). Charles Elton, one of the founders of ecology, regarded lakes as waterbodies of 40 hectares (99 acres) or more.[8] The term lake is also used to describe a feature such as Lake Eyre, which is a dry basin most of the time but may become filled under seasonal conditions of heavy rainfall. In common usage, many lakes bear names ending with the word pond, and a lesser number of names ending with lake are in quasi-technical fact, ponds. One textbook illustrates this point with the following: "In Newfoundland, for example, almost every lake is called a pond, whereas in Wisconsin, almost every pond is called a lake."[9]
One hydrology book proposes to define the term "lake" as a body of water with the following five characteristics:[4]
it partially or totally fills one or several basins connected by straits;[4]
it has essentially the same water level in all parts (except for relatively short-lived variations caused by wind, varying ice cover, large inflows, etc.);[4]
it does not have regular intrusion of seawater;[4]
a considerable portion of the sediment suspended in the water is captured by the basins (for this to happen they need to have a sufficiently small inflow-to-volume ratio);[4]
its area measured at the mean water level exceeds an arbitrarily chosen threshold (for instance, one hectare).[4]
With the exception of the seawater intrusion criterion, the others have been accepted or elaborated upon by other hydrology publications.
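Because the size-only definitions discussed above differ only in where the area threshold sits, the same waterbody can be a lake under one definition and a pond under another. A small sketch of the thresholds cited in the text (the 6-hectare example waterbody is hypothetical):

```python
# Area thresholds (hectares) above which a waterbody counts as a lake,
# per the competing size-only definitions cited in the text.
THRESHOLDS_HA = {
    "2 ha rule": 2,
    "5 ha rule": 5,
    "8 ha rule": 8,
    "Elton (40 ha)": 40,
}

def classify(area_ha, threshold_ha):
    """Return 'lake' or 'pond' under a given size-only definition."""
    return "lake" if area_ha >= threshold_ha else "pond"

# A 6-hectare waterbody is a lake under some definitions, a pond under others.
for name, threshold in THRESHOLDS_HA.items():
    print(f"{name}: a 6 ha waterbody is a {classify(6, threshold)}")
```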




Ponds are used for the provision of fish and other wildlife, including waterfowl, which are a source of food for humans. Pollutants entering ponds are often substantially mitigated by the natural sedimentation and biological activities within the water body. Ponds are also a major contributor to local ecosystem richness and diversity for both plants and animals.[18]
In the Indian subcontinent, Hindu temples usually have a pond nearby so that pilgrims can take baths. These ponds are considered sacred.
In medieval times in Europe, it was typical for many monasteries and castles (small, partly self-sufficient communities) to have fish ponds. These are still common in Europe and in East Asia (notably Japan), where koi may be kept.
Waste stabilization ponds are used as a low-cost method for wastewater treatment.
In agriculture, treatment ponds may reduce nutrients released downstream from the pond. They may also provide irrigation reservoirs at times of drought.


The technical distinction between a pond and a lake has not been universally standardized. Limnologists and freshwater biologists have proposed formal definitions for pond, in part to include 'bodies of water where light penetrates to the bottom of the waterbody,' 'bodies of water shallow enough for rooted water plants to grow throughout,' and 'bodies of water which lack wave action on the shoreline.' Each of these definitions has met with resistance or disapproval, as the defining characteristics are each difficult to measure or verify. Accordingly, some organizations and researchers have settled on technical definitions of pond and lake which rely on size alone.[4]
Even among organizations and researchers who distinguish lakes from ponds by size alone, there is no universally recognised standard for the maximum size of a pond. The international Ramsar wetland convention sets the upper limit for pond size as 8 hectares (20 acres),[5] but biologists have not universally adopted this convention. Researchers for the British charity Pond Conservation have defined a pond to be 'a man-made or natural waterbody which is between 1 m2 and 20,000 m2 in area (2 ha or ~5 acres), which holds water for four months of the year or more.'[4] Other European biologists have set the upper size limit at 5 ha (12 acres).[6]
In practice, a body of water is called a pond or a lake on an individual basis, as conventions change from place to place and over time. In North America, even larger bodies of water have been called ponds; for example, Walden Pond in Concord, Massachusetts measures 61 acres (25 ha), nearby Spot Pond is 340 acres (140 ha), while in between is Crystal Lake at 33 acres (13 ha). There are numerous examples in other states of bodies of water less than 10 acres (4.0 ha) being called lakes. As the case with Crystal Lake shows, marketing purposes may be the driving factor behind some names.
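The Pond Conservation definition quoted above is precise enough to state directly in code; a sketch, with hypothetical example waterbodies:

```python
def is_pond_pc(area_m2, months_holding_water):
    """Pond Conservation's definition: between 1 m2 and 20,000 m2 (~2 ha)
    in area, and holding water for four months of the year or more."""
    return 1 <= area_m2 <= 20_000 and months_holding_water >= 4

print(is_pond_pc(area_m2=15_000, months_holding_water=12))   # True
print(is_pond_pc(area_m2=250_000, months_holding_water=12))  # False: too large
print(is_pond_pc(area_m2=500, months_holding_water=2))       # False: too ephemeral
```

Note that Walden Pond, at 61 acres (about 250,000 m2), fails this test by two orders of magnitude, which illustrates the text's point that common names and technical definitions often disagree.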




MUTUM PARANA, Brazil — When it is completed in 2015, the Jirau hydroelectric dam will span five miles across the Madeira River, feature more giant turbines than any other dam in the world and hold as much concrete as 47 towers the size of the Empire State Building.
And then there are the power lines, draped along 1,400 miles of forests and fields to carry electricity from here in the center of South America to Brazil’s urban nerve center, Sao Paulo.
Still, it won’t be enough.
The dam and the Santo Antonio complex that is being built a few miles downstream will provide just 5 percent of what government energy planners say the country will need in the next 10 years. So Brazil is building more dams, many more, courting controversy by locating the vast majority of them in the world’s largest and most biodiverse forest.
“The investment to build these plants is very high, and they are to be put in a region which is an icon for environmental preservation, the Amazon,” said Paulo Domingues, energy planning director for the Ministry of Mines and Energy. “So that has worldwide repercussions.”
Between now and 2021, the energy ministry’s building schedule will be feverish: Brazilian companies and foreign conglomerates will put up 34 sizable dams in an effort to increase the country’s capacity to produce energy by more than 50 percent.
The Brazil projects have received less attention than China’s dam-building spree, which has plugged up canyons and bankrolled hydroelectric projects far from Asia.
But Brazil is undertaking one of the world’s largest public works projects, one that will cost more than $150 billion and harness the force of this continent’s great rivers. The objective is to help the country of 199 million people achieve what Brazilian leaders call its destiny: becoming a modern and efficient world-class economy with an ample supply of energy for office towers, assembly lines, refineries and iron works.
“Brazil is a country that’s growing, developing, and it needs energy,” said Eduardo de Melo Pinto, president of Santo Antonio Energia. “And the potential in energy production in Brazil is located, for the most part, in Amazonia. And that’s why this is important for this project to be developed.”
Jirau, Santo Antonio and other projects, though, have until now generated more tension than electricity, raising questions that range from their environmental impact to whether future generations will be saddled with gigantic debt.
International Rivers, a U.S.-based environmental group that has tracked government agencies involved in the dam building, says plans call for 168 dams to be completed by 2021. Most are small dams that will be used to regulate water or to power silos, mineral extraction facilities or industrial complexes. But whether the dams are large or small, homesteaders and Indian leaders say they will cause irreversible changes in a forest that plays a vital role in absorbing the world’s carbon emissions and regulating its climate.
Across Brazil, rivers are being diverted. Canals and dikes are being built. Roads are being paved, and blocks of concrete are being laid across a network of waterways that provides a fifth of the world’s fresh water.
And the big dams will inundate at least 2,500 square miles of forests and fields — an area larger than the state of Delaware.
Environmentalists say the dams are a throwback, not the kind of projects a modern, democratic country should be aggressively pursuing. They say Brazil should focus instead on developing wind and solar energy while overhauling existing plants and instituting other reforms to reduce electrical demand.
“This is a sort of 1950s development mentality that often proceeds in a very authoritarian way, in terms of not respecting human rights, not respecting environmental law, not really looking at the alternatives,” said Brent Millikan, Amazon program director in Brazil for International Rivers.
Lives torn asunder
In a swath of Rondonia state, along the BR-364 highway, several residents said the dams had uprooted communities of subsistence farmers and fishermen, unalterably changing their way of life for the worse.
Telma Santos Pinto, 53, said she had to leave her home of 36 years, receiving $18,000 as compensation from the companies building Jirau.
“The compensation was very, very low,” she said. “And we were obligated to accept that.”
Her town, Mutum Parana, was left underwater. Most of her neighbors moved into Nova Mutum — or New Mutum — a town of 1,600 homes, schools, churches and stores put up by the builders of Jirau.
“We were a community, all of us united,” she said. “All of us helped each other.”
Such laments come up against the hard economic realities that Brazil faces.
By 2021, the economy is projected to expand by 63 percent, the energy ministry says. Hundreds of thousands of people are receiving electricity for the first time each year, and a ballooning middle class is consuming more. Economic planners also predict that Brazil could become the world’s fifth-largest economy in a few years.
No Brazilian leader is more focused on that objective than President Dilma Rousseff, a former 1970s-era guerrilla who was energy minister in her predecessor’s government. She says that Brazil is “privileged” to have so much water and that it is logical for the country to rely heavily on hydropower.
She counters environmentalists by arguing that Brazil’s energy mix — the country also relies on solar, wind and biomass, all renewable energy sources — is among the world’s cleanest.
“Economic growth is not contrary to the best environmental practices,” Rousseff said at the inauguration of one huge dam in October. “We are proving that it’s possible to increase electrical generation and at the same time respect the environment.”
Priority projects
To be sure, the footprints of the new dams will be smaller than those of the past.
The proposed Belo Monte project on the Xingu, a huge dam that has galvanized environmentalists and Hollywood luminaries, will flood one-fifth as much land as the 29-year-old Tucurui dam, Brazil's second-biggest, said Domingues, the energy ministry planner.
The Jirau dam includes ladders to help migrating fish make it upstream and conservation programs for animal and bird life.
Gil Maranhão, the Jirau dam’s communications and business development director, said “the real deforestation is maybe zero” because the flooding has taken out cattle ranches and small subsistence farms rather than large swaths of forest.
He said the $7.7 billion project has created jobs and prompted the consortium building the dam to spend $600 million on social programs and housing for the 350 families that had to be relocated.
“The impacted population move from slums without electricity, without sewage, and we put them in new cities built for them,” he said, pointing to Nova Mutum.
Jose Gomes, a civil engineer who is the project’s institutional director, said rigid requirements ensured that the environmental impacts of Jirau and Santo Antonio were minimized. Building dams, he said, here and elsewhere, is a major priority that will not be derailed.
“Brazil needs two hydroelectric dams like this to provide power each and every year,” Gomes said. “We’re going to have energy guaranteed.”
Cranes stretched into the sky and steel reinforcements were going up. Although the turbines were not yet operating, the power houses were firmly installed. Upriver, more than 100 square miles of land were underwater.
It was clear that the mighty Madeira, the biggest tributary of the Amazon, had been tamed.

You will find the latest information about us on this page. Our company is constantly evolving and growing. We provide a wide range of science stories. Our mission is to provide the best solutions to help everyone.

I started this website on 26 February 2018.

More interesting posts are coming soon.



The troposphere starts at the Earth's surface and extends 8 to 14.5 kilometers (5 to 9 miles) high. This part of the atmosphere is the densest, and almost all weather occurs in this region.

The stratosphere starts just above the troposphere and extends to 50 kilometers (31 miles) high. The ozone layer, which absorbs and scatters the solar ultraviolet radiation, is in this layer.

The mesosphere starts just above the stratosphere and extends to 85 kilometers (53 miles) high. Meteors burn up in this layer.

The thermosphere starts just above the mesosphere and extends to 600 kilometers (372 miles) high. Auroras occur and many satellites orbit in this layer.

The ionosphere is an abundant layer of electrons and ionized atoms and molecules that stretches from about 48 kilometers (30 miles) above the surface to the edge of space at about 965 km (600 mi), overlapping the mesosphere and thermosphere. This dynamic region grows and shrinks based on solar conditions and divides further into the sub-regions D, E and F, based on which wavelengths of solar radiation are absorbed. The ionosphere is a critical link in the chain of Sun-Earth interactions, and this region is what makes radio communications possible.

The exosphere is the upper limit of our atmosphere. It extends from the top of the thermosphere up to 10,000 km (6,200 mi).
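The layer boundaries quoted above can be collected into a small lookup table. The sketch below is only illustrative; it uses 14.5 km as a representative troposphere top even though the real boundary varies between 8 and 14.5 km with latitude and season:

```python
# Approximate upper boundaries (km) of each atmospheric layer,
# taken from the altitudes quoted above.
LAYER_TOPS = [
    ("troposphere", 14.5),
    ("stratosphere", 50),
    ("mesosphere", 85),
    ("thermosphere", 600),
    ("exosphere", 10_000),
]

def layer_at(altitude_km):
    """Return the name of the layer containing the given altitude."""
    for name, top in LAYER_TOPS:
        if altitude_km <= top:
            return name
    return "outer space"

print(layer_at(9))    # troposphere: where almost all weather occurs
print(layer_at(25))   # stratosphere: home of the ozone layer
print(layer_at(400))  # thermosphere: typical satellite altitude
```

Note that the ionosphere is deliberately left out of the table: it overlaps the mesosphere and thermosphere rather than forming a separate altitude band.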



New economic developments in the Arctic, such as trans-Arctic shipping and oil exploitation, will bring along unprecedented risks of marine oil spills. The world is therefore calling for a thorough understanding of the resilience and "self-cleaning" capacity of Arctic ecosystems to recover from oil spills.
Although numerous efforts are put into cleaning up large oil spills, only 15 to 25% of the oil can be effectively removed by mechanical methods. This was the case in major oil disasters such as the Exxon Valdez spill in Prince William Sound, Alaska, and the Deepwater Horizon in the Gulf of Mexico, and future spills will be no different. Oil-eating microbes played the major role in degrading the oil and reducing its impact during these past disasters.
"We are now presenting a first assessment of the microbial degradation potential in seawaters off Greenland," postdoc Leendert Vergeynst, Arctic Research Centre at Aarhus University, explains.
The research group has identified six factors challenging the microbes in Arctic seas.
Low temperatures, sea ice and few nutrients
Low temperature changes the chemical properties of spilled oil and slows down biodegradation. For example, cold oil is more viscous, which hampers oil dispersion. The efficiency of microbial degradation is decreased when oil is not dispersed in small droplets.
Waves also play an important role in breaking the oil into droplets. However, where there is sea ice, there are far fewer or no waves.
The Arctic is generally an environment with very low amounts of nutrients such as nitrogen and phosphorus. These nutrients are not present in the oil, so oil-eating bacteria need to find them in the water. Scarce nutrients reduce the activity of the oil-eating bacteria.
Particle formation, sunlight and adaptation
Massive phytoplankton (algae) blooms and suspended mineral particles released by glaciers occur during the Arctic spring and summer. Concentrations of particles from glacier outlets and algae blooms in Arctic waters can be orders of magnitude higher than in the Gulf of Mexico, where phytoplankton, mineral particles and oil droplets stuck together and sank to the seafloor, forming a "dirty blizzard" during the 2010 Deepwater Horizon oil spill. Microbial degradation of oil on the seafloor is much slower than in the water column.
The 24-hour sunlight during the Arctic summer may help microbes break oil molecules into smaller pieces. However, it may also make oil compounds more toxic to aquatic organisms. Much more research is needed to properly understand the effect of sunlight on oil spills in Arctic ecosystems.
In other marine waters, regular small oil spills have allowed microbes to adapt ('learn') to eat oil molecules. The Arctic, however, is still a very pristine environment, so the researchers are currently investigating whether its microbial populations have adapted to degrading oil compounds.
"We are especially concerned that the most toxic molecules in the oil, such as polycyclic aromatic hydrocarbons, may be the most difficult to degrade," says Leendert Vergeynst.

Cyanobacteria -- which propel the ocean engine and help sustain marine life -- can shift their colour like chameleons to match different coloured light across the world's seas, according to research by an international collaboration including the University of Warwick.
The researchers have shown that Synechococcus cyanobacteria -- which use light to capture carbon dioxide from the air and produce energy for the marine food chain -- contain specific genes that alter their pigmentation depending on the type of light in which they float, allowing them to adapt and thrive in any part of the world's oceans.
"Blue light is most prevalent in the open oceans, as it penetrates into deep waters -- whereas in warm equatorial and coastal waters there is more green light, and in estuaries the light is often red," explains David Scanlan, who is Professor in Marine Microbiology in the University of Warwick's School of Life Sciences.
These specific 'chromatic adaptor' genes are abundant in ocean dwelling Synechococcus -- enabling these colour-shifting microorganisms to change their pigment content in order to survive and photosynthesise in ocean waters, especially when the light quality changes from blue to green.
Professor Scanlan commented on the significance of the research:
"Finding Synechococcus cells capable of dynamically changing their pigment content in accordance with the ambient light colour -- abundant in ocean ecosystems, making them planktonic 'chameleons' -- gives us a much deeper understanding of those processes essential to keep the ocean 'engine' running.
"This will help improve how we look after our waters -- and will allow us to better predict how oceans will react in the future to a changing climate with increasing levels of carbon dioxide in the atmosphere."
The researchers made their discovery using data from the Tara Oceans expedition -- which took seawater samples from ocean waters all over the world.
From this data, Professor Scanlan and colleagues analysed specific gene sequences from Synechococcus in the different samples, identifying particular 'chromatic adaptor' genes in bacteria living thousands of miles apart.
This discovery represents a major breakthrough in our understanding of these organisms, which are key primary producers and potentially excellent bio-indicators of climate change.




Since the Kobe Ocean Bottom Exploration Center (KOBEC) was established in 2015, the Center has carried out three survey voyages to the Kikai Caldera, south of Japan's main islands. Based on these voyages, researchers have confirmed that a giant lava dome was created after the caldera-forming supereruption 7300 years ago. The dome is in the world's largest class of post-caldera volcano, with a volume of over 32 cubic kilometers. The composition of this lava dome is different from the magma that caused the giant caldera to erupt -- it shows the same chemical characteristics as the current post-caldera volcano on the nearby Satsuma Iwo-jima Island. It is possible that a giant magma buildup currently exists under the Kikai Caldera.
These findings were published in the online edition of Scientific Reports on February 9.
There is roughly a 1% chance of a giant caldera-forming eruption occurring within the Japanese archipelago during the next 100 years. An eruption like this would see over 40 cubic kilometers of magma released in one burst, causing enormous damage. The mechanism behind this and how to predict this event are urgent questions.
Researchers equipped the training ship Fukae Maru, part of the Kobe University Graduate School of Maritime Sciences, with the latest observation equipment to survey the Kikai Caldera. They chose this volcano for two main reasons. Firstly, for land-based volcanoes it is hard to carry out large-scale observations using artificial earthquakes because of the population density, and it is also difficult to detect giant magma buildups with precise visualization because they often lie at depths of roughly 10 km. Secondly, the Kikai Caldera caused the most recent giant caldera-forming eruption in the Japanese archipelago (7300 years ago), and there is a high possibility that a large buildup of magma exists inside it.
During the three survey voyages, KOBEC carried out detailed underwater geological surveys, seismic reflection, observations by underwater robots, samples and analysis of rocks, and observations using underwater seismographs and electromagnetometers.
In their upcoming March 2018 voyage, researchers plan to use seismic reflection and underwater robots to clarify the formation process of the double caldera revealed in previous surveys and the mechanism that causes a giant caldera eruption.
They will also use seismic and electromagnetic methods to determine the existence of a giant magma buildup, and in collaboration with the Japan Agency for Marine-Earth Science and Technology will carry out a large-scale underground survey, attempting to capture high-resolution visualizations of the magma system within the Earth's crust (at a depth of approximately 30km). Based on results from these surveys, the team plans to continue monitoring and aims to pioneer a method for predicting giant caldera-forming eruptions.
The formation of metallic ore deposits is predicted to accompany the underwater hydrothermal activity, so the team also plans to evaluate these undersea resources.
1. Caldera: a depression in the land formed when a volcano erupts
2. Giant caldera-forming eruption: an eruption that releases a large amount of magma (>40 km³) and forms a large-scale caldera. This sort of eruption has occurred ten times in the Japanese archipelago in the last 120,000 years. Giant caldera volcanoes are concentrated in Kyushu and Hokkaido.
3. Seismic reflection survey: Causing an artificial earthquake with air guns or similar, receiving the seismic waves that have reflected or refracted below ground, and estimating the subsurface structure.


Some deep-sea skates -- cartilaginous fish related to rays and sharks -- use volcanic heat emitted at hydrothermal vents to incubate their eggs, according to a new study in the journal Scientific Reports. Because deep-sea skates have some of the longest egg incubation times, estimated to last more than four years, the researchers believe the fish are using the hot vents to accelerate embryo development. This is the first time such behavior has been seen in marine animals.
"Hydrothermal vents are extreme environments, and most animals that live there are highly evolved to live in this environment," said Charles Fisher, Professor and Distinguished Senior Scholar of Biology at Penn State and an author of the paper. "This study is one of the few that demonstrates a direct link between the vent environment and animals that live most of their life elsewhere."
Among the least explored and unique ecosystems, deep-sea hydrothermal fields are regions on the sea floor where hot water emerges after being heated in the ocean crust. In their study, an international team of researchers, led by Pelayo Salinas-de-León of the Charles Darwin Research Station, used a remotely operated underwater vehicle (ROV) to survey in and around an active hydrothermal field located in the Galapagos archipelago, 28 miles north of Darwin Island.
"The first place the ROV landed on the sea floor was on a ridge, in the plume of a nearby hydrothermal vent that we had specifically come to investigate -- a black smoker," said Fisher. "When we panned the camera down, we found something we did not expect: These giant egg cases, also known as mermaid purses. And we found several layers of them, indicating that whatever was laying these eggs had been coming back to this spot for many years to lay them. As the dive progressed, we saw more and more of these egg cases and realized that this was not the result of a single animal, but rather a behavior shared by many individuals. "
The researchers found 157 egg cases in the area and collected four with the ROV's robotic arm. DNA analysis revealed that the egg cases belonged to the skate species Bathyraja spinosissima, one of the deepest-living species of skates that is not typically thought to occur near the vents. The majority -- 58 percent -- of the observed egg cases were found within about 65 feet of the chimney-like black smokers, the hottest kind of hydrothermal vents, and over 89 percent had been laid in places where the water was hotter than average. The researchers believe that the warmer temperatures in the area could reduce the typically years-long incubation time of the eggs.
While several species of reptiles and birds lay their eggs in locations that optimize soil temperatures, only two other groups of animals are known to use volcanically heated soils: the modern-day Polynesian megapode -- a rare bird native to Tonga -- and a group of nest-building neosauropod dinosaurs from the Cretaceous Period.
Because of their long lifespan and slow rate of development, deep-water skates may be particularly sensitive to threats to their environment, including fisheries expanding into deeper waters and sea-floor mining. Understanding the development and habitat of the skates is vital for developing effective conservation strategies for this poorly understood species.
"The deep sea is full of surprises," said Fisher. "I've made hundreds of dives, both in person and virtually, to deep sea hydrothermal vents and have never seen anything like this."




Large areas of the Earth's surface are experiencing rising maximum temperatures, which affect virtually every ecosystem on the planet, including ice sheets and tropical forests that play major roles in regulating the biosphere, scientists have reported.
An analysis of records from NASA's Aqua satellite between 2003 and 2014 shows that spikes in maximum surface temperatures occurred in the tropical forests of Africa and South America and across much of Europe and Asia in 2010 and in Greenland in 2012. The higher temperature extremes coincided with disruptions that affected millions of people: severe droughts in the tropics and heat waves across much of the northern hemisphere. Maximum temperature extremes were also associated with widespread melting of the Greenland ice sheet.
The satellite-based record of land surface maximum temperatures, scientists have found, provides a sensitive global thermometer that links bulk shifts in maximum temperatures with ecosystem change and human well-being.
Those are among the conclusions reported in the Journal of Applied Meteorology and Climatology by a team of scientists from Oregon State University, the University of Maryland, the University of Montana and the Pacific Northwest Research Station of the U.S. Forest Service.
Land surface temperature measures the heat radiated by land and vegetation. While weather stations typically measure air temperatures just above the surface, satellites record the thermal energy emitted by soil, rock, pavement, grass, trees and other features of the landscape. Over forests, for example, the satellite measures the temperature of the leaves and branches of the tree canopy.
"Imagine the difference between the temperature of the sand and the air at the beach on a hot, summer day," said David Mildrexler, the lead author who received his Ph.D. from the College of Forestry at Oregon State last June. "The air might be warm, but if you walk barefoot across the sand, it's the searing hot surface temperature that's burning your feet. That's what the satellites are measuring."
The researchers looked at annual maximum land surface temperatures averaged across 8-day periods throughout the year for every 1-square kilometer (247 acres) pixel on Earth. NASA collects surface temperature measurements with an instrument known as MODIS (Moderate Resolution Imaging Spectroradiometer) on two satellites (Aqua and Terra), which orbit the Earth from north to south every day. Mildrexler and his team focused on the annual maximum for each year as recorded by the Aqua satellite, which crosses the equator in the early afternoon as temperatures approach their daily peak. Aqua began recording temperature data in the summer of 2002.
"As anyone who pays attention to the weather knows, the Earth's temperature has incredible variability," said Mildrexler. But across the globe and over time, the planet's profile of high temperatures tends to be fairly stable from year to year. In fact, he said, the Earth has a maximum temperature profile that is unique, since it is strongly influenced by the presence of life and the overall frequency and distribution of the world's biomes. It was the discovery of a consistent year-to-year profile that allowed the researchers to move beyond a previous analysis, in which they identified the hottest spots on Earth, to the development of a new global-change indicator that uses the entire planet's maximum land surface temperatures.
In their analysis, the scientists mapped major changes in 8-day maximum land surface temperatures over the course of the year and examined the ability of such changes to detect heat waves and droughts, melting ice sheets and tropical forest disturbance. In each case, they found significant temperature deviations during years in which disturbances occurred. For example, heat waves were particularly severe, droughts were extensive in tropical forests, and melting of the Greenland ice sheet accelerated in association with shifts in the 8-day maximum temperature.
In 2010, for example, one-fifth of the global land area experienced extreme maximum temperature anomalies that coincided with heat waves and droughts in Canada, the United States, Northern Europe, Russia, Kazakhstan, Mongolia and China and unprecedented droughts in tropical rainforests. These events were accompanied by reductions in ecosystem productivity, the researchers wrote, in addition to wildfires, air pollution and agricultural losses.
"The maximum surface temperature profile is a fundamental characteristic of the Earth system, and these temperatures can tell us a lot about changes to the globe," said Mildrexler. "It's clear that the bulk shifts we're seeing in these maximum temperatures are correlated with major changes to the biosphere. With global temperatures projected to continue rising, tracking shifts in maximum temperature patterns and the consequences to Earth's ecosystems every year globally is potentially an important new means of monitoring biospheric change."
The researchers focused on satellite records for land surfaces in daylight. NASA also produces satellite-based temperature records for the oceans and for nighttime portions of the globe.
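The per-pixel computation described above (taking each year's maximum of the 8-day composites and comparing it with a multi-year baseline) can be sketched in a few lines of NumPy. This is a simplified illustration, not the authors' code; the array layout, function name and synthetic data are assumptions:

```python
import numpy as np

def annual_max_anomaly(lst_8day, baseline_years):
    """
    lst_8day: array of shape (years, 46, rows, cols) holding the 46
              8-day maximum land-surface-temperature composites per year.
    baseline_years: indices of the years used as the reference climatology.
    Returns each year's annual-maximum LST anomaly per pixel, relative
    to the baseline mean of annual maxima.
    """
    annual_max = lst_8day.max(axis=1)             # (years, rows, cols)
    baseline = annual_max[baseline_years].mean(axis=0)
    return annual_max - baseline                  # positive = hotter than usual

# Tiny synthetic example: 3 years of 46 composites over a 2x2 tile (kelvin)
rng = np.random.default_rng(0)
data = rng.normal(300, 5, size=(3, 46, 2, 2))
anom = annual_max_anomaly(data, baseline_years=[0, 1])
print(anom.shape)  # one anomaly map per year: (3, 2, 2)
```

A bulk shift like the 2010 anomaly would show up here as a large fraction of pixels with strongly positive values in that year's map.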




Athletes who suffer life-threatening heat stroke should be cooled on site before they are taken to the hospital, according to an expert panel's report published in the journal Prehospital Emergency Care.


The principle of "cool first, transport second" differs from the usual practice of calling 911 and getting to the hospital as soon as possible.

The article was published online Jan. 16, 2018.

"In the case of heat stroke, the definitive care is cooling, which may best be performed immediately onsite before transport," said Jolie C. Holschen, MD, FACEP, a Loyola Medicine emergency medicine physician and co-author of the expert panel's consensus statement. First author of the statement is Luke Beval, MS, of the Korey Stringer Institute at the University of Connecticut.

Exertional heat stroke is one of the most common causes of death in athletes. Although it can happen in cooler temperatures, it typically occurs in warm weather during events such as marathons and preseason football practices.

The athlete shows central nervous system disturbances such as confusion, irritability or irrational behavior, which may culminate in a collapse or loss of consciousness. There is a common misconception that the athlete will have stopped sweating, have hot skin or be unconscious, but none of these symptoms are required for heat stroke.

The Korey Stringer Institute organized a meeting of national experts in emergency medicine and sports medicine to identify best practices for treating exertional heat stroke in prehospital settings. The institute is named after a Minnesota Viking football player who died from heat stroke during a sweltering training camp.

The panel recommended rapidly cooling the body to less than 104.5 degrees F (the threshold for critical cell damage) within 30 minutes of the time of collapse. Cooling should end once the body temperature drops to about 101.5 degrees F.

The best cooling method is to immerse the athlete in a tub of cold water. If a tub isn't available, a tarp, shaped like a taco and filled with cold water, could be tried. (This is known as tarp-assisted cooling.) Less effective cooling methods include cold-water dousing, cold showers, fans and icepacks.

"Transportation of an exertional heat stroke patient should occur only if it is impossible to cool adequately onsite or after adequate cooling has been verified by a body temperature assessment," the expert panel wrote. If a patient cannot be cooled onsite, paramedics should try the most aggressive cooling methods possible in the ambulance, such as continuously applying cold wet towels.
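The panel's thresholds lend themselves to a simple decision sketch. The snippet below is only an illustration of the logic described above, with hypothetical function and variable names; it is not medical software:

```python
CRITICAL_F = 104.5   # threshold for critical cell damage (deg F)
TARGET_F = 101.5     # stop cooling once body temperature drops to about here

def heat_stroke_action(body_temp_f, can_cool_onsite):
    """Next step under the 'cool first, transport second' principle."""
    if body_temp_f <= TARGET_F:
        return "stop cooling; transport"
    if can_cool_onsite:
        return "continue cold-water immersion onsite"
    return "transport; cool aggressively en route"

print(heat_stroke_action(106.0, True))    # continue cold-water immersion onsite
print(heat_stroke_action(101.0, True))    # stop cooling; transport
print(heat_stroke_action(106.0, False))   # transport; cool aggressively en route
```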

The panel's paper is titled "Consensus Statement -- Prehospital Care of Exertional Heat Stroke." The goal of the consensus statement is to raise awareness of the need to implement the most rapid method of cooling, and to do so immediately in the field when resources are available, Dr. Holschen said.

"When doctors serve in sporting events as medical directors and team physicians, they must be prepared to cool onsite," Dr. Holschen said. "We also want to give emergency medical services the leeway to cool the patient before transport, when superior cooling methods are available. EMS directors should build this into their protocols and standard operating procedures."

Dr. Holschen is an associate professor in the department of emergency medicine of Loyola University Chicago Stritch School of Medicine. She is a fellow of the American College of Emergency Physicians and is board certified in emergency medicine and in the sports medicine subspecialty of emergency medicine.


Meteorology is the scientific study of the atmosphere that focuses on weather processes and forecasting.

Meteorological phenomena are observable weather events which illuminate and are explained by the science of meteorology.

Those events are bound by the variables that exist in Earth's atmosphere.

They are temperature, pressure, water vapor, and the gradients and interactions of each variable, and how they change in time.

The majority of Earth's observed weather is located in the troposphere.

Although meteorologists now rely heavily on computer models (numerical weather prediction), it is still relatively common to use techniques and conceptual models that were developed before computers were powerful enough to make predictions accurately or efficiently.






A new study has found that levels of commercial fish stocks could be harmed as rising sea temperatures affect their source of food.


University of Adelaide scientists have demonstrated how climate change can drive the collapse of marine "food webs."

Published in the open access journal PLOS Biology, the study's lead author, PhD student Hadayet Ullah, and his supervisors, Professor Ivan Nagelkerken and Associate Professor Damien Fordham of the University's Environment Institute, show that increased temperatures reduce the vital flow of energy from the primary food producers at the bottom (e.g. algae), to intermediate consumers (herbivores), to predators at the top of marine food webs.

Such disturbances in energy transfer can potentially lead to a decrease in food availability for top predators, which, in turn, can lead to negative impacts for many marine species within these food webs.

"Healthy food webs are important for maintenance of species diversity and provide a source of income and food for millions of people worldwide," said Mr Ullah. "Therefore, it is important to understand how climate change is altering marine food webs in the near future."

Twelve large 1,600 litre tanks were constructed to mimic predicted conditions of elevated ocean temperature and acidity caused by increasing human greenhouse gas emissions. The tanks harboured a range of species including algae, shrimp, sponges, snails, and fishes.

The mini-food web was maintained under future climate conditions for six months, during which time the researchers measured the survival, growth, biomass, and productivity of all animals and plants, and used these measurements in a sophisticated food web model.

"Whilst climate change increased the productivity of plants, this was mainly due to an expansion of cyanobacteria (small blue-green algae)," said Mr Ullah. "This increased primary productivity does not support food webs, however, because these cyanobacteria are largely unpalatable and they are not consumed by herbivores."

Understanding how ecosystems function under the effects of global warming is a challenge in ecological research. Most research on ocean warming involves simplified, short-term experiments based on only one or a few species.

"If we are to adequately forecast the impacts of climate change on ocean food webs and fisheries productivity, we need more complex and realistic approaches that provide more reliable data for sophisticated food web models," said project leader Professor Nagelkerken.

Marine ecosystems are already experiencing major impacts from global warming, making it vital to better understand how these results can be extrapolated to ecosystems worldwide.



If you want to do something about global warming, look under your feet. Managed well, soil's ability to trap carbon dioxide is potentially much greater than previously estimated, according to Stanford researchers who claim the resource could "significantly" offset increasing global emissions. They call for a reversal of federal cutbacks to related research programs to learn more about this valuable resource.


The work, published in two overlapping studies Oct. 5 in Annual Review of Ecology, Evolution and Systematics and Global Change Biology, emphasizes the need for more research into how soil -- if managed well -- could mitigate a rapidly changing climate.

"Dirt is not exciting to most people," said earth system science professor Rob Jackson, lead author of the Annual Review of Ecology, Evolution and Systematics article and coauthor of the Global Change Biology paper. "But it is a no-risk climate solution with big cobenefits. Fostering soil health protects food security and builds resilience to droughts, floods and urbanization."

Humble, yet mighty

Organic matter in soil, such as decomposing plant and animal residues, stores more carbon than do plants and the atmosphere combined. Unfortunately, the carbon in soil has been widely lost or degraded through land use changes and unsustainable forest and agricultural practices, fires, nitrogen deposition and other human activities. The greatest near-term threat comes from thawing permafrost in Earth's northern reaches, which could release massive amounts of carbon into the atmosphere.

Despite these risks, there is also great promise, according to Jackson and Jennifer Harden, a visiting scholar in Stanford's School of Earth, Energy & Environmental Sciences and lead author of the Global Change Biology paper.

Improving how the land is managed could increase soil's carbon storage enough to offset future carbon emissions from thawing permafrost, the researchers find. Among the possible approaches: reduced tillage, year-round livestock forage and compost application. Planting more perennial crops, instead of annuals, could store more carbon and reduce erosion by allowing roots to reach deeper into the ground.

Jackson, Harden and their colleagues also found that about 70 percent of all sequestered carbon in the top meter of soil is in lands directly affected by agriculture, grazing or forest management -- an amount that surprised the authors.

"I think if beer bets were involved, we all would have lost," Harden said of her coauthors.

Jackson and his coauthors found a number of other surprises in their analysis. For example, plant roots are five times more likely than leaves to turn into soil organic matter for the same mass of material. The study also provides the most complete estimate yet of carbon in peatland and permafrost -- almost half of the world's estimated soil carbon.

"Retaining and restoring soil organic matter helps farmers grow better crops, purifies our water and keeps the atmosphere cleaner," said Jackson, Michelle and Kevin Douglas Provostial Professor in the School of Earth, Energy & Environmental Sciences.

Overcoming obstacles

The Jackson-led study describes an unexpectedly large stock of potentially vulnerable carbon in the northern taiga, an ecosystem that is warming more rapidly than any other. These carbon stocks are comparatively poorly mapped and understood.

The study warns of another danger: overestimating the amount of organic matter in soil. Jackson and his coauthors calculate there may be 25-30 percent less than currently estimated due to constraints from bedrock, a factor not previously analyzed in published scientific research.

While scientists are now able to remotely map and monitor environmental changes on Earth's surface, they still don't have a strong understanding of the interactions among biological, chemical and physical processes regulating carbon in soils. This knowledge is critical to understanding and predicting how the carbon cycle will respond to changes in the ecosystem, increasing food production and safeguarding natural services we depend on, such as crop pollination and underground water storage.

A rapidly changing climate -- and its effects on soil -- make these scientific advances all the more urgent.

"Soil has changed under our feet," Harden said. "We can't use the soil maps made 80 years ago and expect to find the same answers."

However, funding pressures such as federal cuts to climate science, combined with turnover in science staff and a lack of systematic data threaten progress on soil carbon research. Jackson, Harden and their colleagues call for a renewed push to gather significantly more data on carbon in the soil and learn more about the role it plays in sequestering carbon. They envision an open, shared network for use by farmers, ranchers and other land managers as well as policymakers and organizations that need good data to inform land investments and conservation.

"If we lose momentum on carbon research, it will stifle our momentum for solving both climate and land sustainability problems," Harden said.



A new study has found that levels of commercial fish stocks could be harmed as rising sea temperatures affect their source of food.


University of Adelaide scientists have demonstrated how climate change can drive the collapse of marine "food webs."

In the study, published in the open access journal PLOS Biology, lead author and PhD student Hadayet Ullah and his supervisors, Professor Ivan Nagelkerken and Associate Professor Damien Fordham of the University's Environment Institute, show that increased temperatures reduce the vital flow of energy from the primary food producers at the bottom (e.g. algae), to intermediate consumers (herbivores), to predators at the top of marine food webs.

Such disturbances in energy transfer can potentially lead to a decrease in food availability for top predators, which in turn, can lead to negative impacts for many marine species within these food webs.

"Healthy food webs are important for maintenance of species diversity and provide a source of income and food for millions of people worldwide," said Mr Ullah. "Therefore, it is important to understand how climate change is altering marine food webs in the near future."

Twelve large 1,600 litre tanks were constructed to mimic predicted conditions of elevated ocean temperature and acidity caused by increasing human greenhouse gas emissions. The tanks harboured a range of species including algae, shrimp, sponges, snails, and fishes.

The mini-food web was maintained under future climate conditions for six months, during which time the researchers measured the survival, growth, biomass, and productivity of all animals and plants, and used these measurements in a sophisticated food web model.
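The energy-flow argument above can be illustrated with a toy trophic model. This is a minimal sketch with made-up numbers, not the study's actual food web model: it simply shows how higher primary production can still mean less energy for top predators when the transfer efficiency between levels drops (as happens when unpalatable cyanobacteria dominate).

```python
# Toy three-level food web: producers -> herbivores -> predators.
# Energy at each level = primary production x (transfer efficiency)^level.
# All numbers are illustrative assumptions, not values from the study.

def energy_at_levels(primary_production, efficiency, levels=3):
    """Energy available at each trophic level, assuming a fixed
    transfer efficiency between successive levels."""
    return [primary_production * efficiency**i for i in range(levels)]

# Ambient conditions: hypothetical 10% transfer efficiency.
ambient = energy_at_levels(1000.0, 0.10)

# Warming scenario: primary production rises (more cyanobacteria),
# but transfer efficiency falls because cyanobacteria go uneaten.
warming = energy_at_levels(1200.0, 0.05)

print([round(x, 6) for x in ambient])  # [1000.0, 100.0, 10.0]
print([round(x, 6) for x in warming])  # [1200.0, 60.0, 3.0]
```

Even though the warming scenario starts with 20 percent more primary production, the top level receives far less energy, which is the pattern the tank experiment observed.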

"Whilst climate change increased the productivity of plants, this was mainly due to an expansion of cyanobacteria (small blue-green algae)," said Mr Ullah. "This increased primary productivity does not support food webs, however, because these cyanobacteria are largely unpalatable and they are not consumed by herbivores."

Understanding how ecosystems function under the effects of global warming is a challenge in ecological research. Most research on ocean warming involves simplified, short-term experiments based on only one or a few species.

"If we are to adequately forecast the impacts of climate change on ocean food webs and fisheries productivity, we need more complex and realistic approaches, that provide more reliable data for sophisticated food web models," said project leader Professor Nagelkerken.

Marine ecosystems are already experiencing major impacts from global warming, making it vital to better understand how these results can be extrapolated to ecosystems worldwide.



A new geological record of the Yellowstone supervolcano's last catastrophic eruption is rewriting the story of what happened 630,000 years ago and how it affected Earth's climate. This eruption formed the vast Yellowstone caldera observed today, the second largest on Earth.


Two layers of volcanic ash bearing the unique chemical fingerprint of Yellowstone's most recent super-eruption have been found in seafloor sediments in the Santa Barbara Basin, off the coast of Southern California. These layers of ash, or tephra, are sandwiched among sediments that contain a remarkably detailed record of ocean and climate change. Together, both the ash and sediments reveal that the last eruption was not a single event, but two closely spaced eruptions that tapped the brakes on a natural global-warming trend that eventually led the planet out of a major ice age.

"We discovered here that there are two ash-forming super-eruptions 170 years apart and each cooled the ocean by about 3 degrees Celsius," said U.C. Santa Barbara geologist Jim Kennett, who will be presenting a poster about the work on Wednesday, 25 Oct., at the annual meeting of the Geological Society of America in Seattle. Attaining the resolution to detect the separate eruptions and their climate effects is due to several special conditions found in the Santa Barbara Basin, Kennett said.

One condition is the steady supply of sediment to the basin from land -- about one millimeter per year. Then there is the highly productive ocean in the area, fed by upwelling nutrients from the deep ocean. This produced abundant tiny shells of foraminifera that sank to the seafloor where they were buried and preserved in the sediment. These shells contain temperature-dependent oxygen isotopes that reveal the sea surface temperatures in which they lived.
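The oxygen-isotope thermometry described above is conventionally expressed as a paleotemperature equation relating the isotopic composition of the shell carbonate to the water it grew in. A minimal sketch follows, using the classic Epstein-style calibration; the coefficients are textbook values and an assumption here, not necessarily the calibration Kennett's team used:

```python
# Classic carbonate paleotemperature equation (Epstein et al. form):
#   T (deg C) = 16.5 - 4.3*d + 0.14*d**2,
#   where d = delta18O(calcite) - delta18O(water), in per mil.
# Coefficients are the standard textbook calibration, assumed here.

def paleotemperature(d18o_calcite, d18o_water):
    d = d18o_calcite - d18o_water
    return 16.5 - 4.3 * d + 0.14 * d**2

# Isotopically lighter shells (lower delta18O) record warmer water:
warm = paleotemperature(-1.0, 0.0)
cold = paleotemperature(0.0, 0.0)
print(round(warm, 2), round(cold, 2))  # 20.94 16.5
```

With a slope of roughly 4.3 degrees C per per-mil, the approximately 3-degree coolings reported would correspond to shifts of well under 1 per mil in the shell record, which is why the basin's exceptional preservation matters.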

But none of this would be much use, said Kennett, were it not for the fact that oxygen levels at the seafloor in the basin are so low as to preclude the burrowing marine animals that would otherwise mix the sediments and degrade details of the climate record. As a result, Kennett and his colleagues can resolve the climate record at decadal resolution.

By comparing the volcanic ash record with the foraminifera climate record, it's quite clear, he said, that both of these eruptions caused separate volcanic winters, in which ash and volcanic sulfur dioxide emissions reduce the amount of sunlight reaching Earth's surface and cause temporary cooling. These cooling events occurred at an especially sensitive time, when the global climate was warming out of an ice age and was easily disrupted by such events.

Kennett and colleagues discovered that the onset of the global cooling events was abrupt and coincided precisely with the timing of the supervolcanic eruptions, the first such observation of its kind.

But each time, the cooling lasted longer than it should have, according to simple climate models, he said. "We see planetary cooling of sufficient magnitude and duration that there had to be other feedbacks involved." These feedbacks might include increased sunlight-reflecting sea ice and snow cover or a change in ocean circulation that would cool the planet for a longer time.

"It was a fickle, but fortunate time," Kennett said of the timing of the eruptions. "If these eruptions had happened during another climate state we may not have detected the climatic consequences because the cooling episodes would not have lasted so long."





Special 'nugget-producing' bacteria may hold the key to more efficient processing of gold ore, mine tailings and recycled electronics, as well as aid in exploration for new deposits, University of Adelaide research has shown.


For more than 10 years, University of Adelaide researchers have been investigating the role of microorganisms in gold transformation. In the Earth's surface, gold can be dissolved, dispersed and reconcentrated into nuggets. This epic 'journey' is called the biogeochemical cycle of gold.

Now they have shown for the first time just how long this biogeochemical cycle takes, and they hope to make it even faster in the future.

"Primary gold is produced under high pressures and temperatures deep below the Earth's surface and is mined, nowadays, from very large primary deposits, such as at the Superpit in Kalgoorlie," says Dr Frank Reith, Australian Research Council Future Fellow in the University of Adelaide's School of Biological Sciences, and Visiting Fellow at CSIRO Land and Water at Waite.

"In the natural environment, primary gold makes its way into soils, sediments and waterways through biogeochemical weathering and eventually ends up in the ocean. On the way bacteria can dissolve and re-concentrate gold -- this process removes most of the silver and forms gold nuggets.

"We've known that this process takes place, but for the first time we've been able to show that this transformation takes place in just years to decades -- that's a blink of an eye in terms of geological time.

"These results have surprised us, and lead the way for many interesting applications such as optimising the processes for gold extraction from ore and re-processing old tailings or recycled electronics, which isn't currently economically viable."

Working with John and Johno Parsons (Prophet Gold Mine, Queensland), Professor Gordon Southam (University of Queensland) and Dr Geert Cornelis (formerly of the CSIRO), Dr Reith and postdoctoral researcher Dr Jeremiah Shuster analysed numerous gold grains collected from West Coast Creek using high-resolution electron-microscopy.

In results published in the journal Chemical Geology, they showed that five 'episodes' of gold biogeochemical cycling had occurred on each gold grain. Each episode was estimated to take between 3.5 and 11.7 years -- a total of under 18 to almost 60 years to form the secondary gold.
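The timing arithmetic behind the quoted range is straightforward to check: five episodes at 3.5 to 11.7 years each.

```python
# Five biogeochemical cycling episodes per gold grain,
# each estimated at 3.5 to 11.7 years (figures from the study).
episodes = 5
low, high = 3.5, 11.7

total_low = episodes * low    # 17.5 years -> "under 18"
total_high = episodes * high  # ~58.5 years -> "almost 60"
print(total_low, total_high)
```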

"Understanding this gold biogeochemical cycle could help mineral exploration by finding undiscovered gold deposits or developing innovative processing techniques," says Dr Shuster, University of Adelaide. "If we can make this process faster, then the potential for re-processing tailings and improving ore-processing would be game-changing. Initial attempts to speed up these reactions are looking promising."

The researchers say that this new understanding of the gold biogeochemical process and transformation may also help verify the authenticity of archaeological gold artefacts and distinguish them from fraudulent copies.


New research from University of Alberta and University of Vienna microbiologists provides unparalleled insight into the Earth's nitrogen cycle, identifying and characterizing the ammonia-oxidizing microbe, Nitrospira inopinata. The findings, explained Lisa Stein, co-author and professor of biology, have significant implications for climate change research.


"I consider nitrogen the camouflaged beast in our midst," said Stein.

"Humans are now responsible for adding more fixed nitrogen, in the form of ammonium, to the environment than all natural sources combined. Because of that, the nitrogen cycle has been identified as the most unbalanced biogeochemical cycle on the planet."

The camouflaged beast

Earth's nitrogen cycle has been thrown significantly off balance by the process we use to make fertilizer, known as the Haber-Bosch process, which adds massive quantities of fixed nitrogen, or ammonium, to the environment. The downstream effects of excess ammonium have huge environmental implications, from dead zones in our oceans to emissions of nitrous oxide, a greenhouse gas roughly 300 times more potent than carbon dioxide on a molecule-for-molecule basis.

Isolation and characterization of the Nitrospira inopinata microbe, Stein said, could hold the answers for Earth's nitrogen problem.

Practical applications

"The Nitrospira inopinata microbe is an ammonium sponge, outcompeting nearly all other bacteria and archaea in its oxidation of ammonium in the environment," explained Stein. "Now that we know how efficient this microbe is, we can explore many practical applications to reduce the amount of ammonium that contributes to environmental problems in our atmosphere, water, and soil."

The applications range from wastewater treatment, with the development of more efficient biofilms, to drinking water and soil purification to climate change research.

"An efficient complete ammonia oxidizer, such as Nitrospira inopinata, may produce less nitrous oxide," explained Kits. "By encouraging our microbe to outgrow other, incomplete oxidizers, we may, in turn, reduce their contribution to the greenhouse gas effect. Further investigation is required."



What do the Gulf of Mexico's "dead zone," global climate change, and acid rain have in common? They're all a result of human impacts to Earth's biology, chemistry and geology, and the natural cycles that involve all three.


On August 4-5, 2009, scientists who study such cycles--biogeochemists--will convene at a special series of sessions at the Ecological Society of America (ESA)'s 94th annual meeting in Albuquerque, N.M.

They will present results of research supported through various National Science Foundation (NSF) efforts, including coupled biogeochemical cycles (CBC) funding. CBC is an emerging scientific discipline that looks at how Earth's biogeochemical cycles interact.

"Advancing our understanding of Earth's systems increasingly depends on collaborations between bioscientists and geoscientists," said James Collins, NSF assistant director for biological sciences. "The interdisciplinary science of biogeochemistry is a way of connecting processes happening in local ecosystems with phenomena occurring on a global scale, like climate change."

A biogeochemical cycle is a pathway by which a chemical element, such as carbon, or compound, like water, moves through Earth's biosphere, atmosphere, hydrosphere and lithosphere.

In effect, the element is "recycled," although in some cycles the element is accumulated or held for long periods of time.

Chemical compounds are passed from one organism to another, and from one part of the biosphere to another, through biogeochemical cycles.

Water, for example, can go through three phases (liquid, solid, gas) as it cycles through the Earth system. It evaporates from plants as well as land and ocean surfaces into the atmosphere and, after condensing in clouds, returns to Earth as rain and snow.

Researchers are discovering that biogeochemical cycles--whether the water cycle, the nitrogen cycle, the carbon cycle, or others--happen in concert with one another. Biogeochemical cycles are "coupled" to each other and to Earth's physical features.

"Historically, biogeochemists have focused on specific cycles, such as the carbon cycle or the nitrogen cycle," said Tim Killeen, NSF assistant director for geosciences. "Biogeochemical cycles don't exist in isolation, however. There is no nitrogen cycle without a carbon cycle, a hydrogen cycle, an oxygen cycle, and even cycles of trace metals such as iron."

Now, with global warming and other planet-wide impacts, biogeochemical cycles are being drastically altered. Like broken gears in machinery that was once finely-tuned, these cycles are falling out of sync.

Knowledge about coupled biogeochemical cycles is "essential to addressing a range of human impacts," said Jon Cole, a biogeochemist at the Cary Institute of Ecosystem Studies in Millbrook, N.Y., and co-organizer of the CBC symposium at ESA.

"It will shed light on questions such as the success of wetland restoration and the status of aquatic food webs. The special CBC conference sessions at ESA will explore future research needs in environmental chemistry, with a focus on how global climate change may impact various habitats."

Earth's habitats have different chemical compositions. Oceans are wet and salty; forest soils are rich in organic forms of nitrogen and carbon that retain moisture.

The atmosphere has a fairly constant chemical composition--roughly 79 percent nitrogen, 20 percent oxygen, and a 1 percent mix of other gases like water, carbon dioxide, and methane.

"Seemingly subtle chemical changes may have large effects," said Cole.

"Consider that global climate change is caused by increases in carbon dioxide and methane, gases which occupy less than ½ of one percent of the atmosphere. Now more than ever, we need a comprehensive view of Earth's biogeochemical cycles."

The study of coupled biogeochemical cycles has direct management applications.

The "dead zone" in the Gulf of Mexico is one example. Nitrogen-based fertilizers make their way from Iowa cornfields to the Mississippi River, where they are transported to the Gulf of Mexico. Once deposited in the Gulf, nitrogen stimulates algal blooms.

When the algae die, their decomposition consumes oxygen, creating an area of water roughly the size of New Jersey that is inhospitable to aquatic life. Protecting the Gulf's fisheries--with an estimated annual value of half-a-billion dollars--relies on understanding how coupled biogeochemical cycles interact.

A better understanding of the relationship between nitrogen and oxygen cycles may help determine how best to use nitrogen fertilizers, for example, to avoid dead zones.



A study published today in Science by researchers from the U.S. Department of Energy's Argonne National Laboratory may dramatically shift our understanding of the complex dance of microbes and minerals that takes place in aquifers deep underground. This dance affects groundwater quality, the fate of contaminants in the ground and the emerging science of carbon sequestration.


Deep underground, microbes don't have much access to oxygen. So they have evolved ways to breathe other elements, including solid minerals like iron and sulfur.

The part that interests scientists is that when the microbes breathe solid iron and sulfur, they transform them into highly reactive dissolved ions that are then much more likely to interact with other minerals and dissolved materials in the aquifer. This process can slowly but steadily make dramatic changes to the makeup of the rock, soil and water.

"That means that how these microbes breathe affects what happens to pollutants -- whether they travel or stay put -- as well as groundwater quality," said Ted Flynn, a scientist from Argonne and the Computation Institute at the University of Chicago and the lead author of the study.

About a fifth of the world's population relies on groundwater from aquifers for their drinking water supply, and many more depend on the crops watered by aquifers.

For decades, scientists thought that when iron was present in these types of deep aquifers, microbes that can breathe it would out-compete those that cannot. There's an accepted hierarchy of what microbes prefer to breathe, ranked by how much energy each reaction can theoretically yield. (Oxygen is considered the best overall, but it is rarely found deep below the surface.)

According to these calculations, of the elements that do show up in these aquifers, breathing iron theoretically provides the most energy to microbes. And iron is frequently among the most abundant minerals in many aquifers, while solid sulfur is almost always absent.

But something didn't add up. Many of the microorganisms carried the machinery to breathe both iron and sulfur. This requires two completely different enzymatic mechanisms, and it's evolutionarily expensive for microbes to maintain the genes necessary to carry out both processes. Why would they bother, if sulfur was so rarely involved?

The team decided to redo the energy calculations assuming an alkaline environment -- "Older and deeper aquifers tend to be more alkaline than pH-neutral surface waters," said Argonne coauthor Ken Kemner -- and found that in alkaline environments, it gets harder and harder to get energy out of iron.

"Breathing sulfur, on the other hand, becomes even more favorable in alkaline conditions," Flynn said.

The team reinforced this hypothesis in the lab with bacteria under simulated aquifer conditions. The bacteria, Shewanella oneidensis, can normally breathe both iron and sulfur. When the pH got as high as 9, however, it could breathe sulfur, but not iron.

There was still the question of where microorganisms like Shewanella could find sulfur in their native habitat, where it appeared to be scarce.

The answer came from another group of microorganisms that breathe a different, soluble form of sulfur called sulfate, which is commonly found in groundwater alongside iron minerals. These microbes exhale sulfide, which reacts with iron minerals to form solid sulfur and reactive iron. The team believes this sulfur is used up almost immediately by Shewanella and its relatives.

"This explains why we don't see much sulfur at any fixed point in time, but the amount of energy cycling through it could be huge," Kemner said.

Indeed, when the team put iron-breathing bacteria in a highly alkaline lab environment without any sulfur, the bacteria did not produce any reduced iron.

"This hypothesis runs counter to the prevailing theory, in which microorganisms compete, survival-of-the-fittest style, and one type of organism comes out dominant," Flynn said. Rather, the iron-breathing and the sulfate-breathing microbes depend on each other to survive.

Understanding this complex interplay is particularly important for sequestering carbon. The idea is that in order to keep harmful carbon dioxide out of the atmosphere, we would compress and inject it into deep underground aquifers. In theory, the carbon would react with iron and other compounds, locking it into solid minerals that wouldn't seep to the surface.

Iron is one of the major players in this scenario, and it must be in its reactive state for carbon to interact with it to form a solid mineral. Microorganisms are essential in making all that reactive iron. Therefore, understanding that sulfur -- and the microbe junkies who depend on it -- plays a role in this process is a significant chunk of the puzzle that has been missing until now.



Many different types of animals come together to form vast groups -- insect swarms, mammal herds, or bird flocks, for example. Researchers in France added another example to the list, reported October 5 in the online journal PLoS ONE: the huge Wels catfish, the world's third largest and Europe's largest fresh-water fish. Researchers observed these fish in the Rhone River from May 2009 to Feb. 2011 and found that they formed dense groups of 15 to 44 individuals, corresponding to an estimated total biomass of up to 1132 kilograms with a biomass density of 14 to 40 kilograms per square meter.


Unlike traditional behavior seen in schools of fish, the catfish in the aggregations did not all point in the same direction and sometimes came into contact with their neighbors. Researchers were not able to determine the reason for this behavior, though they ruled out reproduction, foraging, and safety from predators.

The species originates from Eastern Europe and is not native to the Rhone, so the researchers were curious what effect these large aggregations may have on the local ecosystem. They calculate that the groups of fish could excrete extremely large amounts of phosphorus and nitrogen in their waste, potentially creating the most intense biogeochemical hotspots yet reported in freshwater ecosystems.

According to the authors, "our study is unique in identifying unexpected ecological impacts of alien species. Our findings will be ground breaking news for many scientific fields including conservation biology, ecosystem ecology and behavioral ecology and anyone interested in biological invasion and the potential ecological impacts of alien species. Therefore, we believe that our manuscript will stimulate further research and discussion in these fields."



Cortisol, deemed the quintessential stress hormone, allows us to cope with important events and imminent threats. A spike in cortisol levels mobilizes necessary resources -- such as by tapping into our body's reserves to produce energy -- and then allows us to return to a stable state. But can our bodies cope with prolonged or repeated stress in the same way? Some studies report lower cortisol levels in humans -- or other mammals -- subject to chronic stress, while other studies contradict these findings. In light of this, is cortisol still a reliable stress indicator?


To answer this question, researchers from Rennes, France, studied 59 adult horses (44 geldings and 15 mares) from three different riding centers, under their usual living conditions: Horses were kept in individual stalls that are both spatially and socially restrictive and ridden by inexperienced equestrians -- both potential stressors that, if recurrent, can lead to chronically compromised welfare. The scientists monitored various behavioral and sanitary indicators of the horses' welfare and measured cortisol levels using blood and stool samples. The equine subjects had all been living under the stated conditions for at least a year at the start of the study, and they were observed for several weeks.

Surprisingly, cortisol levels in horses showing signs of compromised welfare (e.g., ears pointed back, back problems, and anemia) were lower than in other horses. These findings accord with earlier observations by the ethology team, which recorded abnormally low cortisol concentrations in horses with depressive-like behavior. Furthermore, cortisol metabolite levels measured in feces correlated with blood cortisol levels, which supports the use of stool-sample analysis as an alternative, noninvasive means of gauging horse welfare.

Low cortisol levels may seem counterintuitive here, but they could be explained by a breakdown of the stress-response system when horses experience stress at excessive levels for excessive lengths of time. So when exactly do the duration and intensity of stress become excessive for these horses? This is one of the questions the team of researchers is now seeking to answer. At any rate, this study demonstrates that cortisol levels are not always reliable indicators of stress or compromised welfare: On the one hand, high cortisol may be a sign of positive stress, driving higher performance; on the other, low cortisol does not necessarily mean lack of stress. Quite the contrary: below a certain threshold, low cortisol levels may be cause for concern.



The wellbeing of zoological animals is set to improve following the successful trial of a new welfare assessment grid, a new study in the journal Veterinary Record reports.


Researchers from Marwell Zoo, the Wildfowl and Wetlands Trust and the School of Veterinary Medicine at the University of Surrey trialled a series of monitoring strategies on primates and birds to help zookeepers ensure the health and safety of animals in their care. The introduction of the practice over a period of 13 weeks at two zoological collections in the South of England clearly demonstrated the level of physical and psychological wellbeing of the animals, and the effect of certain interventions.

The welfare assessment grid requires daily monitoring of a range of factors, such as the animals' physical condition, their psychological wellbeing and the quality of the environment, as well as the daily procedures they experience. These factors were not all previously part of the regular health checks that zookeepers were required to assess when they were undertaking animal welfare audits. In each area the primates and birds were scored, helping to monitor their progress and highlight any potential problems.

Although welfare protection of zoo animals is enshrined in both European and domestic legislation, monitoring it comprehensively in zoos has proven difficult due to the absence of clear and consistent guidance.

Sarah Wolfensohn, Professor of Animal Welfare at the University of Surrey, said: "Ensuring a high standard of animal welfare is paramount for any zoo, but it has not always been possible. This innovative system will give zookeepers clear guidance on what they should be looking out for in terms of physical and psychological characteristics in animals, which will help monitor their overall wellbeing.

"Zoos are a key part of educating us all about our environment and the animals we share it with across the world, and we all want to know that the animals we do see in zoos are being given the best possible care for their welfare."



A recently published study in the journal Pachyderm highlights the ongoing effort of accredited zoos to address challenges and improve the sustainability of endangered species populations in their care. The study, co-authored by scientists from San Diego Zoo Global and Mars Hill University, evaluated fertility issues in captive-born southern white rhinos and determined that diets including soy and alfalfa were likely contributors to breeding challenges.


"The captive southern white rhinoceros (SWR) population is not currently self-sustaining, due to the reproductive failure of captive-born females," said Christopher Tubbs Ph.D, San Diego Zoo Global and lead author of the paper. "Our research into this phenomenon points to chemicals produced by plants present in captive diets, such as soy and alfalfa, as likely causes."

Soy and alfalfa are commonly included in feeds for many herbivorous animals under human care; however, these diets have high levels of phytoestrogens, which disrupt normal hormone function in some species. The study reviews historical data on the reproductive success of southern white rhinos in North American zoos, which show that female rhinos born in captive environments had lower reproductive success. At the San Diego Zoo Safari Park, animal care staff switched the southern white rhinos in their care to a low-phytoestrogen diet in 2014. The nutritional change appears to be an effective means of addressing the challenge.

"Following our diet modification, routine monitoring of the reproductive status of our female SWR suggested that the diet change was having a positive impact," said Tubbs. "Two females that had previously not reproduced have now become pregnant and successfully given birth to healthy calves."







Reptiles are often chosen as pets when an allergy risk exists within a family and the choice is made to avoid potentially allergenic pets such as dogs, cats or guinea pigs. Researchers at the Messerli Research Institute, however, recently described a noteworthy clinical case in which an eight-year-old boy developed nightly attacks of severe shortness of breath four months after the purchase of a bearded dragon.


The cause for the allergic reaction turned out not to be the lizard itself but the animal's food. The grasshoppers used to regularly feed the lizard were revealed to be the source of the allergy.

First author Erika Jensen-Jarolim speaks of the tip of an iceberg: "Even colleagues with allergologic expertise could overlook insects as reptile food as a possible cause of such allergic reactions. Far too little is known about grasshoppers as a potential allergenic source in homes. We do know of cases, however, in which fish food has caused allergies. And insects are often processed in fish food."

Grasshopper enzymes identified as allergens

For a long time, the cause of the allergic reaction in the eight-year-old Viennese boy remained unknown. The initial diagnosis was pseudo croup, an infection of the respiratory tract, and severe asthma. Allergy expert Jensen-Jarolim and her team considered the possibility of a pet allergy and chose to also test the reptile food: grasshoppers. An allergy skin test and evidence of specific IgE antibodies finally brought certainty: grasshopper allergens were the cause of the allergic reactions in the child.

"We were in the middle of a study investigating sources of allergies at pet stores. So coming upon the reptile food was pure coincidence," says Jensen-Jarolim.

Allergy persists long after exposure

On Jensen-Jarolim's advice, the reptile was immediately removed from the boy's home. The symptoms abated as a result. Four years later, however, the boy exposed himself to the allergen again, which triggered an allergic asthmatic reaction even after all that time.

New rules for handling reptiles

"We are seeing a shift in the attitude towards reptiles from a pure hobby or biological interest toward a human-animal relationship with an emotional component. It is difficult to estimate the number of reptiles and food animals living in people's homes, and the true figure is sure to be high," Jensen-Jarolim believes. She recommends keeping reptile food outside the home. The reptiles themselves should not be kept in living rooms, as undigested insects end up in the terraria via the faeces. Pet owners could then inhale the aggressive allergens, leading to conditions such as asthma or skin inflammation.

"Grasshopper allergies have been nearly unknown to date. With our publication, it is our intention to sensitise the public to this matter. We are especially concerned about people who keep such animals, pet store employees as well as physicians, who should include questions regarding reptile pets and their food as a routine in their allergy diagnostic consultation," stresses Jensen-Jarolim.



Experts are warning cat owners to be aware of the risks associated with feeding their pets raw meat-based diets (RMBDs), instead of the more conventional dry or canned pet foods.

In the Vet Record today, a team of researchers based in The Netherlands say these diets may be contaminated with bacteria and parasites, and as such may pose a risk to both animal and human health.

Feeding RMBDs to companion animals has become increasingly popular across the world, yet claims of health benefits are not backed by evidence, and several studies have reported possible risks.

Of most concern, however, is the risk to public or animal health from contamination of RMBDs with zoonotic bacteria and parasites that can pass between animals and humans.

So a team led by Paul Overgaauw at Utrecht University set out to determine the presence of four zoonotic bacteria and two parasite species in commercial RMBDs, available in most pet shops and supermarkets.

They analysed 35 commercial frozen RMBDs from eight different brands, widely available in The Netherlands. Escherichia coli O157 was isolated from eight products (23%), Listeria species were present in 15 products (43%) and Salmonella species in seven products (20%). Both E coli O157 and Salmonella infections in humans have been linked with serious illnesses.

Four products (11%) contained the parasite Sarcocystis cruzi and another four contained Sarcocystis tenella. In two products (6%) Toxoplasma gondii was found. The Sarcocystis species are not zoonotic but pose a risk to farm animals. T gondii is an important zoonosis with a high disease burden in humans.
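As a quick sanity check, the reported percentages follow directly from the counts in the two paragraphs above, all out of the 35 products sampled (the dictionary keys below are just illustrative labels):

```python
# Contamination counts from the 35 RMBD products analysed in the study.
counts = {
    "E. coli O157": 8,
    "Listeria spp.": 15,
    "Salmonella spp.": 7,
    "Sarcocystis cruzi": 4,
    "Sarcocystis tenella": 4,
    "Toxoplasma gondii": 2,
}
n_products = 35

# Prevalence of each pathogen as a rounded percentage,
# matching the figures quoted in the text (23%, 43%, 20%, 11%, 6%).
prevalence = {k: round(100 * v / n_products) for k, v in counts.items()}
print(prevalence)
```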

"Despite the relatively low sample size of frozen products in our study, it is clear that commercial RMBDs may be contaminated with a variety of zoonotic bacterial and parasitic pathogens that may be a possible source of bacterial infections in pet animals and if transmitted pose a risk for human beings," say the researchers.

"Cats and dogs that eat raw meat diets are also more likely to become infected with antibiotic-resistant bacteria than animals on conventional diets, which could pose a serious risk to both animal health and public health," they add.

They outline several ways in which pet owners and other household members can encounter such pathogens. For example, through direct contact with the food or with an infected pet; through contact with contaminated household surfaces; or by eating cross-contaminated human food.

They therefore suggest that pet owners should be informed about the risks associated with feeding their animals RMBDs, and should be educated about personal hygiene and proper handling of RMBDs.

Warnings and handling instructions should also be included on product labels and/or packages, they advise.



Recent incidents of adulteration involving infant formula, other milk products and pet food with the industrial chemical melamine revealed the weaknesses of current methods widely used across the domestic and global food industry for determining protein content in foods. The possible utility of alternative existing and emerging methods is the subject of a new paper published in Comprehensive Reviews in Food Science and Food Safety, a peer-reviewed journal of the Institute of Food Technologists (IFT).


The paper, now available online, is authored by a team of experts led by Jeffrey Moore, Ph.D., of the U.S. Pharmacopeial Convention (USP). USP publishes the Food Chemicals Codex (FCC), a compendium of quality standards for food ingredients.

The paper examines how reliance on 19th century methods -- primarily the Kjeldahl method and the combustion (Dumas) method -- for measuring total protein content in foods, and the lack of more specific methods, allowed for the adulteration of protein-based foods with melamine and related nonprotein compounds in 2007 and 2008. Rather than quantifying protein content directly, these methods -- still the standard for the food industry -- use total nitrogen as a marker to estimate the amount of protein in a food. Such approaches allow unscrupulous parties to fool the tests simply by adding a cheap nitrogen-rich organic compound. The consequences can include severe physical harm to humans and animals, as well as financial damage to food producers and consumers through price increases, market disruptions, trade restrictions, product liability costs, lost revenue and harm to brands.
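A rough back-of-the-envelope calculation shows why nitrogen-based assays are so easy to fool. The sketch below assumes the common generic nitrogen-to-protein conversion factor of 6.25 (specific commodities use other factors, e.g. roughly 6.38 for dairy); melamine's formula, C3H6N6, is a matter of chemistry:

```python
# Kjeldahl/Dumas-style assays measure total nitrogen, then report
# "crude protein" as nitrogen x a conversion factor (commonly 6.25).
N_TO_PROTEIN = 6.25

# Melamine, C3H6N6: mass fractions from standard atomic weights.
mass_C = 3 * 12.011
mass_H = 6 * 1.008
mass_N = 6 * 14.007
melamine_mass = mass_C + mass_H + mass_N
nitrogen_fraction = mass_N / melamine_mass  # roughly two-thirds nitrogen

# Apparent "protein" contributed by 1 g of melamine under a
# nitrogen-based assay -- even though melamine contains no protein.
apparent_protein_per_gram = N_TO_PROTEIN * nitrogen_fraction
print(f"{nitrogen_fraction:.1%} nitrogen by mass -> "
      f"{apparent_protein_per_gram:.2f} g apparent protein per g melamine")
```

Because each gram of melamine registers as roughly four grams of apparent protein, even a small addition substantially inflates the measured protein content.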

"While the globalization of the food industry has provided consumers with a seemingly endless number of choices and year-round availability to enhance their diets, the events of 2007 and 2008 have shown that it may also introduce new risks -- leaving the industry as a whole and individual consumers vulnerable to potential serious consequences," said Dr. Moore. "Adulteration of foods represents a significant public health threat that needs to be addressed. In this paper, we look at a path forward on the complex issue of protein measurement -- development, validation and implementation of new analysis methods specific for protein-based food ingredients."

As described in the paper, protein content is held at a premium because of the nutritional value of proteins as well as their contribution to functional properties of food such as texture and flavor. Thus, protein quantification is an important tool used throughout the global food supply chain, helping to determine the economic value of a food. The authors note that as long as the value of food ingredients is based on protein content, the incentive to adulterate these materials by measures designed to inflate protein measurement will exist -- necessitating the need for new approaches used by the food industry.

To stimulate discussion and to provide new information about the development and adoption of new or alternative protein methodologies, the authors of the paper review the following:

the early history of food protein methodology

analytical strategies to prevent intentional adulteration of foods and food ingredients

challenges of developing or adopting new or alternative protein quantification methods and associated reference materials

criteria against which new methodologies can be evaluated, and

emerging methodologies for total food protein measurement, including pros and cons.

The paper looks at the two primary analytical strategies to prevent "economic adulteration of food," which is defined as "fraudulent addition of non-authentic substances or removal or replacement of authentic substances without the purchaser's knowledge for economic gain of the seller." The first approach uses analytical tests to identify one or more suspected adulterants, where an "absence of" result indicates the test material is not adulterated with a specific material. This requires prior knowledge of the adulterant, so it cannot detect unknown adulterants or prevent future adulteration with novel substitutes. The second approach is based on compendial identification and purity tests that substantiate an ingredient's identity and quantify its purity, i.e., a "presence of" result. This approach is effective when either a known or unknown adulterant is substituted for the original material at concentrations high enough to be recognized in test results. As noted in the paper, it is less useful when adulterants are present at low concentrations; from a practical perspective, however, counterfeiters usually must adulterate at relatively high concentrations to realize an economic gain. Such purity standards are contained in compendia including the FCC. At this time, no current compendial methods are sufficiently selective to differentiate protein from other nitrogen-containing compounds.

The paper identifies a host of existing methods for food protein measurement that may exhibit potential for broader use (and the associated pros and cons of each method), including infrared methods, amino acid-based methods and new spectral probes. Certain methods have existed for some time but have not achieved routine use by the food industry, instead having been largely limited to research applications. However, the authors note that even though these methods may have some utility, many food matrices have unique requirements that necessitate different approaches for protein measurement. This may require a combination of different protein analysis methods to effectively prevent adulteration. The paper also looks at emerging methods including antibody based methods and high performance liquid chromatography (HPLC) that may be useful once sufficiently developed for practical use in routine protein measurements.

"Through further exploration of available and emerging methods -- and new work in this area -- the ultimate hope is to protect public health by preventing the next melamine," noted Dr. Moore.



It has become common for people who have pets to refer to themselves as "pet parents," but how closely does the relationship between people and their non-human companions mirror the parent-child relationship? A small study from a group of Massachusetts General Hospital (MGH) researchers makes a contribution to answering this complex question by investigating differences in how important brain structures are activated when women view images of their children and of their own dogs. Their report is being published in the open-access journal PLOS ONE.


"Pets hold a special place in many people's hearts and lives, and there is compelling evidence from clinical and laboratory studies that interacting with pets can be beneficial to the physical, social and emotional wellbeing of humans," says Lori Palley, DVM, of the MGH Center for Comparative Medicine, co-lead author of the report. "Several previous studies have found that levels of neurohormones like oxytocin -- which is involved in pair-bonding and maternal attachment -- rise after interaction with pets, and new brain imaging technologies are helping us begin to understand the neurobiological basis of the relationship, which is exciting."

In order to compare patterns of brain activation involved with the human-pet bond with those elicited by the maternal-child bond, the study enrolled a group of women with at least one child aged 2 to 10 years old and one pet dog that had been in the household for two years or longer. Participation consisted of two sessions, the first being a home visit during which participants completed several questionnaires, including ones regarding their relationships with both their child and pet dog. The participants' dog and child were also photographed in each participant's home.

The second session took place at the Athinoula A. Martinos Center for Biomedical Imaging at MGH, where functional magnetic resonance imaging (fMRI) -- which indicates levels of activation in specific brain structures by detecting changes in blood flow and oxygen levels -- was performed as participants lay in a scanner and viewed a series of photographs. The photos included images of each participant's own child and own dog alternating with those of an unfamiliar child and dog belonging to another study participant. After the scanning session, each participant completed additional assessments, including an image recognition test to confirm she had paid close attention to photos presented during scanning, and rated several images from each category shown during the session on factors relating to pleasantness and excitement.

Of 16 women originally enrolled, complete information and MR data were available for 14 participants. The imaging studies revealed both similarities and differences in the way important brain regions reacted to images of a woman's own child and own dog. Areas previously reported as important for functions such as emotion, reward, affiliation, visual processing and social interaction all showed increased activity when participants viewed either their own child or their own dog. A region known to be important to bond formation -- the substantia nigra/ventral tegmental area (SNi/VTA) -- was activated only in response to images of a participant's own child. The fusiform gyrus, which is involved in facial recognition and other visual processing functions, actually showed greater response to own-dog images than own-child images.

"Although this is a small study that may not apply to other individuals, the results suggest there is a common brain network important for pair-bond formation and maintenance that is activated when mothers viewed images of either their child or their dog," says Luke Stoeckel, PhD, MGH Department of Psychiatry, co-lead author of the PLOS One report. "We also observed differences in activation of some regions that may reflect variance in the evolutionary course and function of these relationships. For example, like the SNi/VTA, the nucleus accumbens has been reported to have an important role in pair-bonding in both human and animal studies. But that region showed greater deactivation when mothers viewed their own-dog images instead of greater activation in response to own-child images, as one might expect. We think the greater response of the fusiform gyrus to images of participants' dogs may reflect the increased reliance on visual rather than verbal cues in human-animal communication."

Co-author Randy Gollub, MD, PhD, of MGH Psychiatry adds, "Since fMRI is an indirect measure of neural activity and can only correlate brain activity with an individual's experience, it will be interesting to see if future studies can directly test whether these patterns of brain activity are explained by the specific cognitive and emotional functions involved in human-animal relationships. Further, the similarities and differences in brain activity revealed by functional neuroimaging may help to generate hypotheses that eventually provide an explanation for the complexities underlying human-animal relationships."

The investigators note that further research is needed to replicate these findings in a larger sample and to see if they are seen in other populations -- such as women without children, fathers and parents of adopted children -- and in relationships with other animal species. Combining fMRI studies with additional behavioral and physiological measures could obtain evidence to support a direct relationship between the observed brain activity and the purported functions.

Stoeckel is a clinical neuropsychologist and lecturer on psychology, and Gollub an associate professor of Psychiatry at Harvard Medical School. Additional co-authors of the PLOS ONE report are Eden Evins, MD, MGH Psychiatry, and Steven Niemi, DVM, Harvard University. Support for the study includes National Institutes of Health grants K23DA032612 and K24DA030443 and support from the Charles A. King Trust. The study was facilitated by imaging consult support from Harvard Catalyst.



A novel PET radiotracer developed at the Martinos Center for Biomedical Imaging at Massachusetts General Hospital (MGH) is able for the first time to reveal epigenetic activity -- the process that determines whether or not genes are expressed -- within the human brain. In their report published in Science Translational Medicine, a team of MGH/Martinos Center investigators reports how their radiochemical -- called Martinostat -- shows the expression levels of important epigenetics-regulating enzymes in the brains of healthy volunteers.


"The ability to image the epigenetic machinery in the human brain can provide a way to begin understanding interactions between genes and the environment," says Jacob Hooker, PhD, of the Martinos Center, senior author of the report. "This could allow us to investigate questions such as: Why are some people who are genetically predisposed to a disease protected from it? Why do events during early life and adolescence have such a lasting impact on brain health? Is it possible to 'reset' gene expression in the human brain?"

A key epigenetic mechanism is the packaging of DNA into chromosomes, in which it wraps around proteins called histones forming a structure called chromatin. Modification of histones by the addition or removal of molecules called epigenetic factors can regulate whether or not an adjacent gene is expressed. One of the most important of these factors is the acetyl molecule, addition of which allows a gene to be transcribed and removal of which -- called deacetylation -- prevents transcription.

Enzymes called histone deacetylases (HDAC) are important regulators of gene transcription, and one group of HDACs has been linked to important brain disorders. Several established neuropsychiatric drugs are HDAC inhibitors, and others are currently being studied as potential treatment for Alzheimer's disease and Huntington's disease. Martinostat was developed in Hooker's laboratory and is patterned after known HDAC inhibitors in order to tightly bind to HDAC molecules in the brain.

PET scans with Martinostat of the brains of eight healthy human volunteers revealed characteristic patterns of uptake -- reflecting HDAC expression levels -- that were consistent among all participants. HDAC expression was almost twice as high in gray matter as in white matter; and within gray matter structures, uptake was highest in the hippocampus and amygdala and lowest in the putamen and cerebellum. Experiments with brain tissues from humans and baboons confirmed Martinostat's binding to HDAC, and studies with neural progenitor stem cells revealed specific genes regulated by this group of HDACs, many of which are known to be important in brain health and disease.

"HDAC dysregulation has been implicated in a growing number of brain diseases, so being able to study HDAC regulation both in the normal brain and through the progression of disease should help us better understand disease processes," says Hooker, who is an associate professor of Radiology at Harvard Medical School. "We've now started studies of patients with several neurologic or psychiatric disorders, and I believe Martinostat will help us understand the different ways these conditions are manifested and provide new insights into potential therapies."



Contrary to popular belief, having a cat in the home does not improve the mental or physical health of children, according to a new RAND Corporation study.

The findings are from the largest-ever study to explore the notion that pets can improve children's health by increasing physical activity and improving young people's empathy skills.

Unlike earlier smaller studies on the topic, the RAND work used advanced statistical tools to control for multiple factors that could contribute to a child's wellbeing other than pet ownership, such as belonging to a family that has higher income or living in a more affluent setting. The results are published online by the journal Anthrozoos.

"We could not find evidence that children from families with dogs or cats are better off either in terms of their mental wellbeing or their physical health," said Layla Parast, a co-author of the study and a statistician at RAND, a nonprofit research organization. "Everyone on the research team was surprised -- we all have or grew up with dogs and cats. We had essentially assumed from our own personal experiences that there was a connection."

The study analyzed information from more than 2,200 children who lived in pet-owning households in California and compared them with children from about 3,000 households without a dog or cat. The information was collected as part of the 2003 California Health Interview Survey, an annual survey that for one year also asked participants whether they had pets, along with an array of other health questions.

Researchers did find that children from pet-owning families tended to have better general health, have slightly higher weight and were more likely to be physically active compared to children whose families did not have pets. In addition, children who had pets were more likely to have ADD/ADHD, were more likely to be obedient and were less likely to have parents concerned about their child's feelings, mood, behavior and learning ability.

But when researchers adjusted the findings to account for other variables that might be associated with both the likelihood that a family has a pet and the child's health, the association between pet ownership and better health disappeared. Overall, researchers considered more than 100 variables in adjusting their model of pet ownership and health, including family income, language skills and type of family housing.
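The logic of that adjustment can be illustrated with a deliberately simplified toy example (the numbers below are entirely made up, and this is stratification on a single variable, not the study's actual model, which considered more than 100): if a confounder such as family income drives both pet ownership and child health, a naive comparison shows an association that vanishes once children are compared within income strata.

```python
# Toy data: (income_stratum, has_pet, health_score) for 20 children.
# Health depends only on income; pets are simply more common in
# higher-income families. All numbers are hypothetical.
children = (
    [("high", True, 80)] * 8 + [("high", False, 80)] * 2 +
    [("low", True, 60)] * 2 + [("low", False, 60)] * 8
)

def mean_health(rows):
    scores = [health for _, _, health in rows]
    return sum(scores) / len(scores)

owners = [c for c in children if c[1]]
non_owners = [c for c in children if not c[1]]

# Naive comparison: pet owners look 12 points "healthier"...
naive_diff = mean_health(owners) - mean_health(non_owners)

# ...but within each income stratum the difference is zero.
adjusted_diffs = []
for stratum in ("high", "low"):
    own = [c for c in owners if c[0] == stratum]
    non = [c for c in non_owners if c[0] == stratum]
    adjusted_diffs.append(mean_health(own) - mean_health(non))

print(naive_diff, adjusted_diffs)
```

Here the entire apparent benefit of pet ownership is an artifact of income, which is exactly the kind of spurious association the RAND team's adjustment was designed to remove.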

While many previous studies have suggested a link between pet ownership and better emotional and physical health, RAND researchers say their analysis has more credibility because it analyzed a larger sample than previous efforts.

Researchers say future research could examine associations involving pet ownership over longer periods of time and in more experimental settings.

The ultimate test of the pet-health hypothesis would require a randomized trial in which some people are given pets and others are not, with the groups being followed for 10 to 15 years to see if there are differences in their health outcomes.

"Such a study would likely be too costly and/or infeasible to implement, and I'm afraid it's not likely to be funded by anybody," Parast said.



Since the isolation of morphine from opium in the 19th century, scientists have hoped to find a potent opioid analgesic that isn't addictive and doesn't cause respiratory arrest with increased doses.


Now scientists at Wake Forest Baptist Medical Center report that in an animal model a novel pain-killing compound, BU08028, is not addictive and does not have adverse respiratory side effects like other opioids. The research findings are published in the Aug. 29 online edition of the Proceedings of the National Academy of Sciences.

"Based on our research, this compound has almost zero abuse potential and provides safe and effective pain relief," said Mei-Chuan Ko, Ph.D., professor of physiology and pharmacology at Wake Forest Baptist and lead author of the study. "This is a breakthrough for opioid medicinal chemistry that we hope in the future will translate into new and safer, non-addictive pain medications."

Pain, a symptom of numerous clinical disorders, afflicts millions of people worldwide. Despite the remarkable advances of the last decade in identifying novel analgesic targets, including the nociceptin-orphanin FQ peptide (NOP) receptor, mu opioid peptide (MOP) receptor agonists remain the most widely used drugs for pain management, even though they are addictive and carry a high mortality rate from respiratory arrest, Ko said.

This study, which was conducted in 12 non-human primates, targeted a combination of classical (MOP) and non-classical (NOP) opioid receptors. The researchers examined behavioral, physiological and pharmacologic factors and demonstrated that BU08028 blocked the detection of pain without the side effects of respiratory depression, itching or adverse cardiovascular events.

In addition, the study showed pain relief lasted up to 30 hours and repeated administration did not cause physical dependence.

"To our knowledge, this is the only opioid-related analgesic with such a long duration of action in non-human primates," Ko said. "We will investigate whether other NOP/MOP receptor-related compounds have safety and tolerability profiles similar to BU08028's, and initiate investigational new drug-enabling studies for one of the compounds to seek FDA approval."



Scientists have long known that solar-energized particles trapped around the planet are sometimes scattered into Earth's upper atmosphere where they can contribute to beautiful auroral displays. Yet for decades, no one has known exactly what is responsible for hurling these energetic electrons on their way. Recently, two spacecraft found themselves at just the right places at the right time to witness first hand both the impulsive electron loss and its cause.


New research using data from NASA's Van Allen Probes mission and FIREBIRD II CubeSat has shown that a common plasma wave in space is likely responsible for the impulsive loss of high-energy electrons into Earth's atmosphere. Known as whistler mode chorus, these waves are created by fluctuating electric and magnetic fields. The waves have characteristic rising tones -- reminiscent of the sounds of chirping birds -- and are able to efficiently accelerate electrons. The results have been published in a paper in Geophysical Research Letters.

"Observing the detailed chain of events between chorus waves and electrons requires a conjunction between two or more satellites," said Aaron Breneman, researcher at the University of Minnesota in Minneapolis, and lead author on the paper. "There are certain things you can't learn by having only one satellite -- you need simultaneous observations at different locations."

The study combined data from FIREBIRD II, which cruises at a height of 310 miles above Earth, and from one of the two Van Allen Probes, which travel in a wide orbit high above the planet. From different vantage points, they could gain a better understanding of the chain of cause and effect of the loss of these high-energy electrons.

Far from being an empty void, the space around Earth is a jungle of invisible fields and tiny particles. It's draped with twisted magnetic field lines and swooping electrons and ions. Dictating the movements of these particles, Earth's magnetic environment traps electrons and ions in concentric belts encircling the planet. These belts, called the Van Allen Radiation Belts, keep most of the high-energy particles at bay.

Sometimes however, the particles escape, careening down into the atmosphere. Typically, there is a slow drizzle of escaping electrons, but occasionally impulsive bunches of particles, called microbursts, are scattered out of the belts.

Late on Jan. 20, 2016, one of the Van Allen Probes observed chorus waves from its lofty vantage point, and immediately after, FIREBIRD II saw microbursts. The new results confirm that the chorus waves play an important role in controlling the loss of energetic electrons -- one extra piece of the puzzle to understand how high-energy electrons are hurled so violently from the radiation belts. This information can additionally help further improve space weather predictions.



It's been 55 years since NASA astronaut John Glenn successfully launched into space to complete three orbits aboard the Friendship 7 Mercury spacecraft, becoming the first American to orbit Earth. The evolution of spaceflight, advancements in science and technology and the progress of public-private commercial partnerships with companies such as SpaceX and Blue Origin have strengthened NASA's goals and the public's confidence to move forward in discovery and human exploration.


More people today are poised to explore space than ever before; those who do will experience the effects of microgravity on the human body. Recognizing the need for data related to those effects, MUSC neuroradiologist Donna Roberts, M.D., conducted a study titled "Effects of Spaceflight on Astronaut Brain Structure as Indicated on MRI," the results of which will be featured in the Nov. 2 issue of the New England Journal of Medicine.

"Exposure to the space environment has permanent effects on humans that we simply do not understand. What astronauts experience in space must be mitigated to produce safer space travel for the public," said Roberts.

While living and working in space can be exciting, space is a hostile environment and presents many physiological and psychological challenges for the men and women of America's space program. For example, NASA astronauts have experienced altered vision and increased pressure inside their heads during spaceflight aboard the International Space Station. These conditions can be serious problems for astronauts, whether they occur in low-Earth orbit aboard the International Space Station or far from Earth, such as on an exploration mission to Mars.

To describe these symptoms, NASA coined the term visual impairment and intracranial pressure syndrome, or VIIP syndrome for short. The cause of VIIP syndrome is thought to be related to the redistribution of body fluid toward the head during long-term microgravity exposure; however, the exact cause is unknown. Given safety concerns and the potential impact to human exploration goals, NASA has made determining the cause of VIIP syndrome and how to resolve its effects a top priority.

Roberts is an associate professor of radiology in the Department of Radiology and Radiological Sciences at MUSC. Before attending medical school at MUSC, she worked at NASA Headquarters in Washington, D.C. Working with NASA's Space Life Sciences Division in the early 1990s, she was already aware of the challenges astronauts faced during long-duration spaceflights. She was concerned about the lack of data describing the adaptation of the human brain to microgravity and proposed to NASA that magnetic resonance imaging (MRI) be used to investigate the anatomy of the brain following spaceflight.

Based on her earlier work, Roberts suspected that subtle anatomical changes in astronauts' brains during spaceflight might be contributing to the development of VIIP syndrome. From 2001 to 2004, shortly after completing a two-year neuroradiology fellowship at the University of California, San Francisco, the South Carolina native led a three-year NASA-funded bed rest study, collaborating with other life sciences researchers at the University of Texas Medical Branch in Galveston.

For this study, she examined the brains and muscular responses of participants who stayed in bed for 90 days, during which time, they were required to keep their heads continuously tilted in a downward position to simulate the effects of microgravity.

Using functional MRI, Roberts evaluated brain neuroplasticity, studying the brain's motor cortex before, during and after long-term bed rest. Results confirmed neuroplasticity in the brain occurred during bed rest, which correlated with functional outcomes of the subjects.

As Roberts evaluated the brain scans, she saw something unusual. She noted a "crowding" occurrence at the vertex, or top of the brain, with narrowing of the gyri and sulci, the bumps and depressions in the brain that give it its folded appearance. This crowding was worse for participants who were on longer bed rest in the study.

Roberts also saw evidence of brain shifting and a narrowing of the space between the top of the brain and the inner table of the skull. She questioned if the same thing might be happening to the astronauts during spaceflight.

In further studies, Roberts acquired brain MRI scans and related data from NASA's Lifetime Surveillance of Astronaut Health program for two groups of astronauts: 18 astronauts who had been in space for short periods of time aboard the U.S. Space Shuttle and 16 astronauts who had been in space for longer periods of time, typically three months, aboard the International Space Station. Roberts and her team then compared the brain images of the two groups of astronauts.

Roberts and study investigators evaluated the cerebrospinal fluid (CSF) spaces at the top of the brain and CSF-filled structures, called ventricles, located at the center of the brain. In addition, the team paired the preflight and postflight MRI cine clips from high-resolution 3-D imaging of 12 astronauts from long-duration flights and six astronauts from short-duration flights and looked for any displacement in brain structure.

Study results confirmed a narrowing of the brain's central sulcus, a groove in the cortex near the top of the brain that separates the parietal and frontal lobes, in 94 percent of the astronauts who participated in long-duration flights and 18.8 percent of the astronauts on short-duration flights. Cine clips also showed an upward shift of the brain and narrowing of the CSF spaces at the top of the brain among the long-duration flight astronauts but not in the short-duration flight astronauts.

The findings showed that significant changes in brain structure occur during long-duration spaceflight. More importantly, the parts of the brain that are most affected -- the frontal and parietal lobes -- control movement of the body and higher executive function. The longer an astronaut stayed in space, the worse the symptoms of VIIP syndrome.

Roberts compared these findings with a similar condition, idiopathic intracranial hypertension (IIH), which affects young, overweight women who present with symptoms similar to VIIP syndrome: blurry vision and high intracranial pressure with no known cause. A common treatment for IIH is a lumbar puncture, in which CSF is drained using a needle placed in the lower back -- a procedure performed by a neuroradiologist such as Roberts. Presently, there is no protocol for performing a lumbar puncture in a microgravity environment.

To further understand the results of the study, Roberts and the team plan to compare repeated postflight imaging of the brains of astronauts to determine if the changes are permanent or if they will return to baseline following some time back on Earth. With NASA's Mars expedition mission set to launch in 2033, there's an urgency for researchers such as Roberts to collect more data about astronauts and understand the basics of human space physiology.

A one-way journey to Mars takes three to six months at best. To reduce travel time between Earth and Mars, the two planets need to be favorably aligned, which occurs approximately every two years.
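That roughly two-year cadence is the Earth-Mars synodic period, which follows directly from the two planets' orbital periods. A quick sketch (the period values are standard figures, not from the article):

```python
# Synodic period: how often Earth and Mars return to the same
# relative alignment. 1/T_syn = 1/T_inner - 1/T_outer.
T_earth = 1.000  # orbital period, years
T_mars = 1.881   # orbital period, years

T_syn = 1.0 / (1.0 / T_earth - 1.0 / T_mars)
print(f"Launch windows recur every {T_syn:.2f} years")
```

This gives about 2.1 years, matching the launch-window spacing described above.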

During this two-year time period, crew members would remain on Mars, carrying out exploration activities. The gravity on Mars is approximately one-third that of Earth. Considering travel to and from Mars, along with the time on the surface, the Martian expedition crew would be exposed to reduced gravity for at least three years, according to Roberts. What would that do to the human body? Could a human even survive that long in a reduced gravity environment?

NASA astronaut Scott Kelly spent 340 days living and working aboard the International Space Station, and astronaut Peggy Whitson recently completed a 288-day mission in space. To date, the longest continuous time in space was 438 days, a record held by Russian cosmonaut Valery Polyakov.

"We know these long-duration flights take a big toll on the astronauts and cosmonauts; however, we don't know if the adverse effects on the body continue to progress or if they stabilize after some time in space," Roberts said. "These are the questions that we are interested in addressing, especially what happens to the human brain and brain function?"

Study co-author and Department of Radiology and Radiological Science colleague Michael Antonucci, M.D., agreed. "This study is exciting in many ways, particularly as it lies at the intersection of two fascinating frontiers of human exploration -- space and the brain."

"We have known for years that microgravity affects the body in numerous ways," he continued.

"However, this study represents the most comprehensive assessment of the impact of prolonged space travel on the brain. The changes we have seen may explain unusual symptoms experienced by returning space station astronauts and help identify key issues in the planning of longer-duration space exploration, including missions to Mars."

Roberts hopes to continue to collect long-term follow-up data on the astronauts already being studied. In addition, she is participating in a new bed rest study in Cologne, Germany, collaborating with Rachael Seidler, Ph.D., of the University of Florida and the German Space Agency. The study simulates astronauts living aboard the International Space Station while being exposed to higher levels of carbon dioxide. Carbon dioxide scrubbers aboard the International Space Station clean and filter the air systems throughout the spacecraft, but some CO2 remains. Roberts will evaluate blood flow to the brain, brain structure and other changes among study subjects.

With her team's hard work and dedication, Roberts hopes to establish MUSC as the go-to institution for further studies in clinical neuroimaging related to space exploration.



Arizona State University's Psyche Mission, a journey to a metal asteroid, has been selected for flight, marking the first time the school will lead a deep-space NASA mission and the first time scientists will be able to see what is believed to be a planetary core.


The mission's spacecraft is expected to launch in 2023, arriving at the asteroid in 2030, where it will spend 20 months in orbit, mapping it and studying its properties.

It will be part of NASA's Discovery Program, a series of lower-cost, highly focused robotic space missions that are exploring the solar system. The Psyche project is capped at $450 million.

"This mission, visiting the asteroid Psyche, will be the first time humans will ever be able to see a planetary core," said principal investigator Lindy Elkins-Tanton, director of ASU's School of Earth and Space Exploration (SESE). "Having the Psyche Mission selected for NASA's Discovery Program will help us gain insights into the metal interior of all rocky planets in our solar system, including Earth."

Psyche, an asteroid orbiting the sun between Mars and Jupiter, is made almost entirely of nickel-iron metal. As such, it offers a unique look into the violent collisions that created Earth and the other terrestrial planets.

The scientific goals of the Psyche mission are to understand the building blocks of planet formation and explore firsthand a wholly new and unexplored type of world. The mission team seeks to determine whether Psyche is a protoplanetary core, how old it is, whether it formed in similar ways to Earth's core, and what its surface is like.

"The knowledge this mission will create has the potential to affect our thinking about planetary science for generations to come," ASU President Michael M. Crow said. "We are in a new era of exploration of our solar system with new public-private sector partnerships helping unlock new worlds of discovery, and ASU will be at the forefront of that research."

Psyche -- a window into planetary cores

Every world explored so far by humans (except gas giant planets such as Jupiter or Saturn) has a surface of ice or rock or a mixture of the two, but its core is thought to be metallic. These cores, however, lie far below rocky mantles and crusts and are considered unreachable in our lifetimes.

Psyche, an asteroid that appears to be the exposed nickel-iron core of a protoplanet, one of the building blocks of the sun's planetary system, may provide a window into those cores. The asteroid is most likely a survivor of violent space collisions, common when the solar system was forming.

Psyche follows an orbit in the outer part of the main asteroid belt, at an average distance from the sun of about 280 million miles, or three times farther from the sun than Earth. It is roughly the size of Massachusetts (about 130 miles in diameter) and dense (7,000 kg/m³).
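The size and density figures above allow a back-of-envelope mass estimate. The sketch below treats the irregular asteroid as a sphere, so it is only an order-of-magnitude illustration:

```python
import math

# Figures from the article; the spherical shape is a simplifying assumption.
diameter_m = 210e3   # ~130 miles in diameter
density = 7000       # kg/m^3

radius = diameter_m / 2
volume = (4.0 / 3.0) * math.pi * radius ** 3   # sphere approximation
mass = density * volume
print(f"Psyche mass ~ {mass:.1e} kg")
```

The result, a few times 10^19 kg, conveys why a dense, Massachusetts-sized metal body is such an unusual target.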

"Being selected to lead this ambitious mission to the all-metal asteroid Psyche is a major milestone that reflects ASU's outstanding research capacity," said Sethuraman Panchanathan, executive vice president and chief research and innovation officer at ASU. "It speaks to our innovative spirit and our world-class scientific expertise in space exploration."

Mission instrument payload

The spacecraft's instrument payload will include magnetometers, multispectral imagers, a gamma ray and neutron spectrometer, and a radio-science experiment.

The multispectral imager, which will be led by an ASU science team, will provide high-resolution images using filters to discriminate between Psyche's metallic and silicate constituents. It consists of a pair of identical cameras designed to acquire geologic, compositional and topographic data.

The gamma ray and neutron spectrometer will detect, measure and map Psyche's elemental composition. The instrument is mounted on a 7-foot (2-meter) boom to distance the sensors from background radiation created by energetic particles interacting with the spacecraft and to provide an unobstructed field of view. The science team for this instrument is based at the Applied Physics Laboratory at Johns Hopkins University.

The magnetometer, which is led by scientists at MIT and UCLA, is designed to detect and measure the remnant magnetic field of the asteroid. It's composed of two identical high-sensitivity magnetic field sensors located at the middle and outer end of the boom.

The Psyche spacecraft will also use an X-band radio telecommunications system, led by scientists at MIT and NASA's Jet Propulsion Laboratory. This instrument will measure Psyche's gravity field and, when combined with topography derived from onboard imagery, will provide information on the interior structure of the asteroid.

The Psyche mission team

In addition to Elkins-Tanton, ASU SESE scientists on the Psyche mission team include Jim Bell, deputy principal investigator and co-investigator, co-investigator Erik Asphaug, and co-investigator David Williams.

NASA's Jet Propulsion Laboratory, managed by Caltech, is the managing organization and will build the spacecraft with industry partner Space Systems Loral (SSL). JPL's contribution to the Psyche mission team includes more than 75 people, led by project manager Henry Stone, project scientist Carol Polanskey, project systems engineer David Oh and deputy project manager Bob Mase. SSL's contribution includes more than 50 people, led by SEP Chassis deputy program manager Peter Lord and SEP Chassis program manager Steve Scott.

Other co-investigators are David Bercovici (Yale University), Bruce Bills (JPL), Richard Binzel (Massachusetts Institute of Technology), William Bottke (Southwest Research Institute -- SwRI), Ralf Jaumann (Deutsches Zentrum für Luft -- und Raumfahrt), Insoo Jun (JPL), David Lawrence (Johns Hopkins University/Applied Physics Laboratory -- APL), Simon Marchi (SwRI), Timothy McCoy (Smithsonian Institution), Ryan Park (JPL), Patrick Peplowski (APL), Thomas Prettyman, (Planetary Science Institute), Carol Raymond (JPL), Chris Russell (UCLA), Benjamin Weiss (MIT), Dan Wenkert (JPL), Mark Wieczorek (Institut de Physique du Globe de Paris), and Maria Zuber (MIT).



In 2013, researchers announced that a pebble found in south-west Egypt was definitely not from Earth. By 2015, other research teams had announced that the 'Hypatia' stone was not part of any known type of meteorite or comet, based on noble gas and nuclear probe analyses.


(The stone was named Hypatia after Hypatia of Alexandria, the first Western woman mathematician and astronomer.)

However, if the pebble was not from Earth, what was its origin and could the minerals in it provide clues on where it came from? Micro-mineral analyses of the pebble by the original research team at the University of Johannesburg have now provided unsettling answers that spiral away from conventional views of the material our solar system was formed from.

Mineral structure

The internal structure of the Hypatia pebble is somewhat like a fruitcake that has fallen off a shelf into some flour and cracked on impact, says Prof Jan Kramers, lead researcher of the study published in Geochimica et Cosmochimica Acta on 28 Dec 2017.

"We can think of the badly mixed dough of a fruitcake representing the bulk of the Hypatia pebble, what we called two mixed 'matrices' in geology terms. The glacé cherries and nuts in the cake represent the mineral grains found in Hypatia's 'inclusions'. And the flour dusting the cracks of the fallen cake represents the 'secondary materials' we found in the fractures in Hypatia, which are from Earth," he says.

The original extraterrestrial rock that fell to Earth must have been at least several meters in diameter, but disintegrated into small fragments of which the Hypatia stone is one.

Weird matrix

Straight away, the Hypatia mineral matrix (represented by the fruitcake dough) looks nothing like that of any known meteorite, the rocks that fall from space onto Earth every now and then.

"If it were possible to grind up the entire planet Earth to dust in a huge mortar and pestle, we would get dust with on average a similar chemical composition as chondritic meteorites," says Kramers. "In chondritic meteorites, we expect to see a small amount of carbon (C) and a good amount of silicon (Si). But Hypatia's matrix has a massive amount of carbon and an unusually small amount of silicon."

"Even more unusual, the matrix contains a high amount of very specific carbon compounds, called polyaromatic hydrocarbons, or PAH, a major component of interstellar dust, which existed even before our solar system was formed. Interstellar dust is also found in comets and meteorites that have not been heated up for a prolonged period in their history," adds Kramers.

In another twist, most (but not all) of the PAH in the Hypatia matrix has been transformed into diamonds smaller than one micrometer, which are thought to have formed in the shock of impact with Earth's atmosphere or surface. These diamonds made Hypatia resistant to weathering, preserving it for analysis ever since it arrived on Earth.

Weirder grains never found before

When researcher Georgy Belyanin analyzed the mineral grains in Hypatia's inclusions (represented by the nuts and cherries of the fruitcake), a number of surprising chemical elements showed up.

"The aluminum occurs in pure metallic form, on its own, not in a chemical compound with other elements. As a comparison, gold occurs in nuggets, but aluminum never does. This occurrence is extremely rare on Earth and the rest of our solar system, as far as is known in science," says Belyanin.

"We also found silver iodine phosphide and moissanite (silicon carbide) grains, again in highly unexpected forms. The grains are the first documented to be found in situ (as is) without having to first dissolve the surrounding rock with acid," adds Belyanin. "There are also grains of a compound consisting of mainly nickel and phosphorus, with very little iron; a mineral composition never observed before on Earth or in meteorites," he adds.

Dr Marco Andreoli, a Research Fellow at the School of Geosciences at the University of the Witwatersrand, and a member of the Hypatia research team says, "When Hypatia was first found to be extraterrestrial, it was a sensation, but these latest results are opening up even bigger questions about its origins."

Unique minerals in our solar system

Taken together, the ancient unheated PAH carbon, the phosphides, the metallic aluminum and the moissanite suggest that Hypatia is an assembly of unchanged pre-solar material -- that is, matter that existed in space before our sun, the Earth and the other planets of our solar system were formed.

Supporting the pre-solar concept is the weird composition of the nickel-phosphorus-iron grains found in the Hypatia inclusions. These three chemical elements are interesting because they belong to the subset of chemical elements heavier than carbon and nitrogen which form the bulk of all the rocky planets.

"In the grains within Hypatia the ratios of these three elements to each other are completely different from that calculated for the planet Earth or measured in known types of meteorites. As such these inclusions are unique within our solar system," adds Belyanin.

"We think the nickel-phosphorus-iron grains formed pre-solar, because they are inside the matrix, and are unlikely to have been modified by shock such as collision with the Earth's atmosphere or surface, and also because their composition is so alien to our solar system," he adds.

"Was the bulk of Hypatia, the matrix, also formed before our solar system? Probably not, because you need a dense dust cloud like the solar nebula to coagulate large bodies," he says.

A different kind of dust

Generally, science says that our solar system's planets ultimately formed from a huge, ancient cloud of interstellar dust (the solar nebula) in space. The first part of that process would be much like dust bunnies coagulating in an unswept room. Science also holds that the solar nebula was homogeneous -- the same kind of dust everywhere.

But Hypatia's chemistry tugs at this view. "For starters, there are no silicate minerals in Hypatia's matrix, in contrast to chondritic meteorites (and planets like the Earth, Mars and Venus), where silicates are dominant. Then there are the exotic mineral inclusions. If Hypatia itself is not presolar, both features indicate that the solar nebula wasn't the same kind of dust everywhere -- which starts tugging at the generally accepted view of the formation of our solar system," says Kramers.

Into the future

"What we do know is that Hypatia was formed in a cold environment, probably at temperatures below that of liquid nitrogen on Earth (-196 Celsius). In our solar system it would have been way further out than the asteroid belt between Mars and Jupiter, where most meteorites come from. Comets come mainly from the Kuiper Belt, beyond the orbit of Neptune and about 40 times as far away from the sun as we are. Some come from the Oort Cloud, even further out. We know very little about the chemical compositions of space objects out there. So our next question will dig further into where Hypatia came from," says Kramers.

The little pebble from the Libyan Desert Glass strewn field in south-west Egypt presents a tantalizing piece for an extraterrestrial puzzle that is getting ever more complex.

The research was funded by the University of Johannesburg Research Council via the PPM Research Centre.

The researchers would like to thank Aly Barakat, Mario di Martino and Romano Serra for access to the Hypatia sample material; and Michael Wiedenbeck and his co-workers at the Geoforschungszentrum Potsdam, Germany for their collaboration.




Despite the many impressive discoveries humans have made about the universe, scientists are still unsure about the birth story of our solar system.


Scientists with the University of Chicago have laid out a comprehensive theory for how our solar system could have formed in the wind-blown bubbles around a giant, long-dead star. Published Dec. 22 in the Astrophysical Journal, the study addresses a nagging cosmic mystery about the abundance of two elements in our solar system compared to the rest of the galaxy.

The prevailing theory is that our solar system formed billions of years ago near a supernova. But the new scenario instead begins with a giant type of star called a Wolf-Rayet star, more than 40 to 50 times the size of our own sun. Wolf-Rayet stars burn the hottest of all stars, producing tons of elements that are flung off the surface in an intense stellar wind. As the star sheds its mass, the stellar wind plows through the surrounding material, forming a bubble structure with a dense shell.

"The shell of such a bubble is a good place to produce stars," because dust and gas become trapped inside where they can condense into stars, said coauthor Nicolas Dauphas, professor in the Department of Geophysical Sciences. The authors estimate that 1 percent to 16 percent of all sun-like stars could be formed in such stellar nurseries.

This scenario differs from the supernova hypothesis in how it accounts for two isotopes that occur in strange proportions in the early solar system compared to the rest of the galaxy. Meteorites left over from the early solar system tell us there was a lot of aluminum-26. In addition, studies, including a 2015 one by Dauphas and a former student, increasingly suggest we had less of the isotope iron-60.

This brings scientists up short, because supernovae produce both isotopes. "It begs the question of why one was injected into the solar system and the other was not," said coauthor Vikram Dwarkadas, a research associate professor in Astronomy and Astrophysics.

This brought them to Wolf-Rayet stars, which release lots of aluminum-26 but no iron-60.

"The idea is that aluminum-26 flung from the Wolf-Rayet star is carried outwards on grains of dust formed around the star. These grains have enough momentum to punch through one side of the shell, where they are mostly destroyed -- trapping the aluminum inside the shell," Dwarkadas said. Eventually, part of the shell collapses inward due to gravity, forming our solar system.

As for the fate of the giant Wolf-Rayet star that sheltered us: Its life ended long ago, likely in a supernova explosion or a direct collapse to a black hole. A direct collapse to a black hole would produce little iron-60; if it was a supernova, the iron-60 created in the explosion may not have penetrated the bubble walls, or was distributed unequally.

Other authors on the paper included UChicago undergraduate student Peter Boyajian and Michael Bojazi and Brad Meyer of Clemson University.


The world's population has passed the 7-billion-person milestone. Of those people, an estimated 6 billion currently have access to cellphones; to put that into perspective, only 4.5 billion have access to toilets. These cellphones, combined with decentralized and inexpensive cube satellites, could make it possible for everyone on Earth to have access to currency.

Bitcoin in space is not a new idea. It has occupied the minds of the brilliant people who first encountered cryptocurrency technology.

The idea is that Bitcoin and other cryptocurrency networks can be broadcast from space to everywhere on Earth, using decentralized cube satellites to deliver the blockchain -- and even a decentralized internet -- to populations whose access is currently nonexistent or severely restricted.

Then, on the ground, mesh networks and decentralized satellite uplink node stations -- all connected to point-of-sale devices and personal communication devices -- would let the network continue to decentralize and expand.

This idea is brilliant, but no one has been able to make it a reality... yet.
It's an idea that Jeff Garzik (a core Bitcoin developer) attempted to bring to life with Dunvegan Space Systems. Largely a one-man mission driven by passion, he was unable to secure the resources to make it happen. That has not stopped him, or others inside and outside the community, from continuing to push for a free and decentralized network.

Recently, Elon Musk (founder of SpaceX) requested permission from the FCC to test high-powered radio signals beamed to Earth from satellites meant to broadcast internet access to the world. Realistically, this project is still four years out in development, is not primarily focused on decentralized economies, and is definitely not incorporating blockchain financial tech... yet.

Then there is James Cantrell (another SpaceX founding team member and owner of Vector Space Systems), whose focus is specifically on such systems. Research and development at Vector has not been rushed but continues to press on, particularly in the microsatellite and rocket launch business. It is estimated that Vector is years ahead of competitors in making this a reality.

So what does Vector Space Systems have to do with Bitcoin and cryptocurrency?

Colin Cantrell, son of Vector Space Systems CEO James Cantrell, is the lead developer (and, many would argue, a cryptocurrency genius) of the cryptocurrency project called Nexus. Nexus is being developed to be the most strongly encrypted cryptocurrency on earth, and other development efforts focus on building the infrastructure and foundation for widespread adoption.

Such technologies include the Lower Level Protocol, Lower Level Libraries, variable Nexus proof-of-stake interest, trust keys with reputation, 571-bit private keys to prepare for quantum computing, three channels of mining and trust to prevent a 51% attack, decentralized checkpoints, fast and decentralized clocks, decaying reward algorithms for a slower, gentler distribution of supply compared to block halving, reserve systems for mining, and more -- all mostly coded from scratch.
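One of the listed ideas -- a decaying emission schedule rather than abrupt block halvings -- can be illustrated with a toy comparison. The parameters below are illustrative only, not Nexus's actual schedule:

```python
import math

def halving_reward(block, initial=50.0, interval=210_000):
    """Bitcoin-style schedule: the block reward halves abruptly
    every `interval` blocks."""
    return initial / (2 ** (block // interval))

def decaying_reward(block, initial=50.0, decay=3.3e-6):
    """Smooth exponential decay chosen here to halve over the same
    interval, but without a sudden supply shock at the boundary."""
    return initial * math.exp(-decay * block)

for b in (0, 210_000, 420_000):
    print(b, halving_reward(b), round(decaying_reward(b), 2))
```

Both curves emit roughly the same total supply over time; the difference is that the decay version changes the reward a tiny amount every block instead of cutting it in half all at once.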

Incorporated in the economic model of Nexus are checks and balances that allow anyone to be rewarded for maintaining the network based on time and trust, without a massive use of energy. This matters because it lets anyone in the world enter the marketplace, unlike fiat and other cryptocurrencies that require a lot of money and power (mostly political). It also matters that, as adoption gains momentum, the network does not overburden the energy system: transactions must stay fast, efficient and affordable enough that anyone can participate.

"Not everyone has money but everyone has time"
-Colin Cantrell

Colin and James are both currently working on the very foundations of a decentralized economic network with the capability to reach everyone and actually work. From the root of the code in Nexus to the thousands of satellites that would orbit Earth, broadcasting the blockchain via Light Fidelity (Li-Fi) to the whole world, the development produced thus far is no small undertaking.

This is a game changing event. This is quite literally as big as the creation of the Internet but far more powerful. For the first time the people of the world will be within reach of truly decentralized banking, government and information.

We see each cryptocurrency add something new to the table when it comes to advancing blockchain technology and giving it real world use.

Take Steemit, for example: a decentralized, politically neutral, algorithmic social media website that pays the idea makers rather than the infrastructure owners. Instead of publishers getting paid over authors, it cuts out the middleman -- just as Bitcoin did with the banks. The idea that authors should be compensated voluntarily has been so successful that some authors on Steemit have made more than many professional writers earn in a whole year.

Some cryptocurrencies are fads, some are hype... but some have real value and add to the value of Bitcoin.

Space is very similar to international waters: anything goes, and no one can regulate it effectively. It is truly a frontier waiting to be conquered by mankind. Like any frontier, it offers freedom in exchange for exploration. Good luck regulating light.

What a perfect place for Bitcoin, something that needs space to grow, to grow... In space.

Nexus is not looking to replace Bitcoin. Nexus is looking to bring Bitcoin to the world. Nexus means to link things together: to link Bitcoin with Nexus, Bitcoin with people, and people with people.

Nexus is the next big thing. The Internet of the people, by the people and for the people -- the We The People Network -- where a merchant in Africa can do business with a customer in China without the central banks or the telecommunications industry being involved. Truly decentralized, from the code to the satellite.

Some cryptocurrencies want to go to the moon. Nexus wants to bring crypto to earth. Nexus Earth. Will you join us?






On June 17, NASA's MAVEN (Mars Atmosphere and Volatile Evolution Mission) will celebrate 1,000 Earth days in orbit around the Red Planet. Since its launch in November 2013 and its orbit insertion in September 2014, MAVEN has been exploring the upper atmosphere of Mars, bringing insight into how the sun stripped Mars of most of its atmosphere, turning a planet once possibly habitable to microbial life into a barren desert world.


"MAVEN has made tremendous discoveries about the Mars upper atmosphere and how it interacts with the sun and the solar wind," said Bruce Jakosky, MAVEN principal investigator from the University of Colorado, Boulder. "These are allowing us to understand not just the behavior of the atmosphere today, but how the atmosphere has changed through time."

During its 1,000 days in orbit, MAVEN has made a multitude of exciting discoveries. Here is a countdown of the top 10 discoveries from the mission:

10. Imaging of the distribution of gaseous nitric oxide and ozone in the atmosphere shows complex behavior that was not expected, indicating that there are dynamical processes of exchange of gas between the lower and upper atmosphere that are not understood at present.

9. Some particles from the solar wind are able to penetrate unexpectedly deep into the upper atmosphere, rather than being diverted around the planet by the Martian ionosphere; this penetration is allowed by chemical reactions in the ionosphere that turn the charged particles of the solar wind into neutral atoms that are then able to penetrate deeply.

8. MAVEN made the first direct observations of a layer of metal ions in the Martian ionosphere, resulting from incoming interplanetary dust hitting the atmosphere. This layer is always present, but was enhanced dramatically by the close passage to Mars of Comet Siding Spring in October 2014.

7. MAVEN has identified two new types of aurora, termed "diffuse" and "proton" aurora; unlike how we think of most aurorae on Earth, these aurorae are unrelated to either a global or local magnetic field.

6. These aurorae are caused by an influx of particles from the sun ejected by different types of solar storms. When particles from these storms hit the Martian atmosphere, they also can increase the rate of loss of gas to space, by a factor of ten or more.

5. The interactions between the solar wind and the planet are unexpectedly complex. This results due to the lack of an intrinsic Martian magnetic field and the occurrence of small regions of magnetized crust that can affect the incoming solar wind on local and regional scales. The magnetosphere that results from the interactions varies on short timescales and is remarkably "lumpy" as a result.

4. MAVEN observed the full seasonal variation of hydrogen in the upper atmosphere, confirming that it varies by a factor of 10 throughout the year. The source of the hydrogen ultimately is water in the lower atmosphere, broken apart into hydrogen and oxygen by sunlight. This variation is unexpected and, as yet, not well understood.

3. MAVEN has used measurements of isotopes in the upper atmosphere (atoms of the same element with different masses) to determine how much gas has been lost through time. These measurements suggest that two-thirds or more of the gas has been lost to space.

2. MAVEN has measured the rate at which the sun and the solar wind are stripping gas from the top of the atmosphere to space today, along with the details of the removal processes. Extrapolation of the loss rates into the ancient past -- when the solar ultraviolet light and the solar wind were more intense -- indicates that large amounts of gas have been lost to space through time.

1. The Mars atmosphere has been stripped away by the sun and the solar wind over time, changing the climate from a warmer and wetter environment early in history to the cold, dry climate that we see today.
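The isotope reasoning behind item 3 can be sketched with a Rayleigh-distillation toy model: lighter isotopes escape to space more easily, so the heavy-isotope enrichment left behind records how much gas was lost. The numbers below are illustrative choices, not MAVEN's actual measurements:

```python
def fraction_remaining(enrichment, alpha):
    """Rayleigh distillation: R/R0 = f**(alpha - 1), where f is the
    fraction of gas remaining and alpha < 1 means the lighter isotope
    escapes preferentially. Solve for f given the observed enrichment."""
    return enrichment ** (1.0 / (alpha - 1.0))

# Example: a 25% heavy-isotope enrichment with alpha = 0.8 implies
# roughly two-thirds of the gas has been lost to space.
f = fraction_remaining(1.25, 0.8)
print(f"remaining: {f:.2f}, lost: {1 - f:.2f}")
```

With these example inputs, about a third of the gas remains, consistent in spirit with the two-thirds-or-more loss figure quoted above.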

"We're excited that MAVEN is continuing its observations," said Gina DiBraccio, MAVEN project scientist from NASA's Goddard Space Flight Center in Greenbelt, Maryland. "It's now observing a second Martian year, and looking at the ways that the seasonal cycles and the solar cycle affect the system."

MAVEN began its primary science mission in November 2014, and is the first spacecraft dedicated to understanding Mars' upper atmosphere. The goal of the mission is to determine the role that loss of atmospheric gas to space played in changing the Martian climate through time. MAVEN is studying the entire region from the top of the upper atmosphere all the way down to the lower atmosphere so that the connections between these regions can be understood.