Tech

How long will the Hubble Telescope last?

Hubble’s aging hardware was last serviced directly in 2009 by space shuttle astronauts, and engineers estimated back then that it would last until around 2016. “After a few years in flight with all the refurbishes, engineers reevaluated the survivability and reliability of the instruments and started pushing everything much further out,” says Tom Brown, the head of the Hubble Space Telescope mission office at the Space Telescope Science Institute in Baltimore. “The most recent estimates say that there’s an excellent chance we’re going to be doing science like we do today until at least 2026, and perhaps the whole decade. It’s looking pretty good right now.” 

Hubble has been used in practically every kind of astronomy investigation: studying planets and moons in our own solar system; peering at distant stars, galaxies, supernovas, nebulas, and other astrophysical phenomena; studying the origins and expansion of the universe. 

Its work in exoplanet science in the last decade has been especially surprising, considering that when the telescope was launched in 1990, we were still five years away from detecting the first exoplanet orbiting a sun-like star. Hubble isn’t useful for actually finding exoplanets but, rather, for follow-up observations that can characterize planets and their atmospheres once they’re found. When the James Webb Space Telescope launches later this year, the two observatories combined might finally help scientists identify an Earth-like world that’s truly hospitable to life. 

The JWST is often promoted as Hubble’s successor, but that isn’t quite right. Hubble can observe the universe in visible and ultraviolet wavelengths, while JWST’s focus is on infrared observations, which help us study early-universe objects and characterize the chemistry on other worlds. Being situated in space, Hubble doesn’t have to worry about interference caused by Earth’s atmosphere, which is especially detrimental to ultraviolet observations (the ozone layer blocks out most UV radiation).

This is also critical when we need eyes to study poorly understood phenomena. Take the 2017 detection of gravitational waves produced by the collision of two neutron stars. Hubble was able to observe the event’s aftermath, providing data outside the infrared spectrum that was used to define the shape and evolution of the merger in crisper detail. 

Four major scientific instruments are currently active onboard Hubble, so even if one or two things stop working, there is still a ton of major science the rest of the observatory can do. The telescope is also built with a lot of redundancy, so single hardware and software failures don’t necessarily stop individual instruments from working. 

That being said, there are no plans for a new service mission. If there’s a catastrophic failure that takes Hubble entirely offline, it’s hard to see NASA greenlighting a repair mission for an observatory that’s over three decades old. 

So what replaces Hubble when it’s finally ready to retire? Brown says other nations have nascent plans to put other missions in orbit that could take up the visible and UV investigations currently run by Hubble. India’s Astrosat space telescope currently does UV observations from space, but with a much smaller aperture. China is looking to launch a space telescope called Xuntian in 2024, and state media says it will observe an area of space 300 times larger than Hubble can. 

The true successor to Hubble might be NASA’s proposed Large Ultraviolet Optical Infrared Surveyor space telescope, or LUVOIR, a general-purpose observatory capable of observing in multiple wavelengths (including infrared, optical, and ultraviolet). But if funded, LUVOIR wouldn’t launch until 2039 at the earliest.

It’s possible Hubble will stay on until it can be truly replaced, but most astronomers are bracing for a big knowledge gap when it finally stops working. “Hubble is really the premier game for doing ultraviolet and optical astronomy,” says Brown. “So much of astronomy, especially when it comes to understanding temperature and chemistry in outer space, hinges on the information you can really get from it. I fear the space community is really going to feel the loss when Hubble stops working.”

Meet Jennifer Daniel, the woman who decides what emoji we get to use

Emoji are now part of our language. If you’re like most people, you pepper your texts, Instagram posts, and TikTok videos with various little images to augment your words—maybe the syringe with a bit of blood dripping from it when you got your vaccination, the prayer (or high-fiving?) hands as a shortcut to “thank you,” a rosy-cheeked smiley face with jazz hands for a covid-safe hug from afar. Today’s emoji catalogue includes nearly 3,000 illustrations representing everything from emotions to food, natural phenomena, flags, and people at various stages of life.

Behind all those symbols is the Unicode Consortium, a nonprofit group of hardware and software companies aiming to make text and emoji readable and accessible to everyone. Part of their goal is to make languages look the same on all devices; a Japanese character should be typographically consistent across all media, for example. But Unicode is probably best known for being the gatekeeper of emoji: releasing them, standardizing them, and approving or rejecting new ones.

Jennifer Daniel is the first woman at the helm of the Emoji Subcommittee for the Unicode Consortium and a fierce advocate for inclusive, thoughtful emoji. She initially rose to prominence for introducing Mx. Claus, a gender-inclusive alternative to Santa and Mrs. Claus; a non-gendered person breastfeeding a non-gendered baby; and a masculine face wearing a bridal veil. 

Now she’s on a mission to bring emoji to a post-pandemic future in which they are as broadly representative as possible. That means taking on an increasingly public role, whether it’s with her popular and delightfully nerdy Substack newsletter, What Would Jennifer Do? (in which she analyzes the design process for upcoming emoji), or inviting the general public to submit concerns about emoji and speak up if they aren’t representative or accurate.

“There isn’t a precedent here,” Daniel says of her job. And to Daniel, that’s exciting not just for her but for the future of human communication.

I spoke to her about how she sees her role and the future of emoji. The interview has been lightly edited and condensed. 

What does it mean to chair the subcommittee on emoji? What do you do?

It’s not sexy. [laughs] A lot of it is managing volunteers [the committee is composed of volunteers who review applications and help in approval and design]. There’s a lot of paperwork. A lot of meetings. We meet twice a week.

I read a lot and talk to a lot of people. I recently talked to a gesture linguist to learn how people use their hands in different cultures. How do we make better hand-gesture emoji? If the image is no good or isn’t clear, it’s a dealbreaker. I’m constantly doing lots of research and consulting with different experts. I’ll be on the phone with a botanical garden about flowers, or a whale expert to get the whale emoji right, or a cardiovascular surgeon so we have the anatomy of the heart down. 

There’s an old essay by Beatrice Warde about typography. She asked if a good typeface is a bedazzled crystal goblet or a transparent one. Some would say the ornate one because it’s so fancy, and others would say the crystal goblet because you can see and appreciate the wine. With emoji, I lean more toward the “transparent crystal goblet” philosophy. 

Why should we care about how our emoji are designed?

My understanding is that 80% of communication is nonverbal. There’s a parallel in how we communicate. We text how we talk. It’s informal, it’s loose. You’re pausing to take a breath. Emoji are shared alongside words.

When emoji first came around, we had the misconception that they were ruining language. Learning a new language is really hard, and emoji is kind of like a new language. It works with how you already communicate. It evolves as you evolve. How you communicate and present yourself evolves, just like yourself. You can look at the nearly 3,000 emoji and it [their interpretation] changes by age or gender or geographic area. When we talk to someone and are making eye contact, you shift your body language, and that’s an emotional contagion. It builds empathy and connection. It gives you permission to reveal that about yourself. Emoji can do that, all in an image.

Product design gets an AI makeover

It’s a tall order, but one that Zapf says artificial intelligence (AI) technology can support by capturing the right data and guiding engineers through product design and development.

No wonder a November 2020 McKinsey survey reveals that more than half of organizations have adopted AI in at least one function, and 22% of respondents report at least 5% of their companywide earnings are attributable to AI. And in manufacturing, 71% of respondents have seen a 5% or more increase in revenue with AI adoption.

But that wasn’t always the case. Once “rarely used in product development,” AI has experienced an evolution over the past few years, Zapf says. Today, tech giants known for their innovations in AI, such as Google, IBM, and Amazon, “have set new standards for the use of AI in other processes,” such as engineering.

“AI is a promising and exploratory area that can significantly improve user experience for designing engineers, as well as gather relevant data in the development process for specific applications,” says Katrien Wyckaert, director of industry solutions for Siemens Industry Software.

The result is a growing appreciation for a technology that promises to simplify complex systems, get products to market faster, and drive product innovation.

Simplifying complex systems

A perfect example of AI’s power to overhaul product development is Renault. In response to increasing consumer demand, the French automaker is equipping a growing number of new vehicle models with an automated manual transmission (AMT)—a system that behaves like an automatic transmission but allows drivers to shift gears electronically using a push-button command.

AMTs are popular among consumers, but designing them can present formidable challenges. That’s because an AMT’s performance depends on the operation of three distinct subsystems: an electro-mechanical actuator that shifts the gears, electronic sensors that monitor vehicle status, and software embedded in the transmission control unit, which controls the transmission. Because of this complexity, it can take up to a year of extensive trial and error to define the system’s functional requirements, design the actuator mechanics, develop the necessary software, and validate the overall system.

In an effort to streamline its AMT development process, Renault turned to Simcenter Amesim software from Siemens Digital Industries Software. The simulation technology relies on artificial neural networks, AI “learning” systems loosely modeled on the human brain. Engineers simply drag, drop, and connect icons to graphically create a model. When displayed on a screen as a sketch, the model illustrates the relationship between all the various elements of an AMT system. In turn, engineers can predict the behavior and performance of the AMT and make any necessary refinements early in the development cycle, avoiding late-stage problems and delays. In fact, by using a virtual engine and transmissions as stand-ins while developing hardware, Renault has managed to cut its AMT development time almost in half.

Speed without sacrificing quality

Emerging environmental standards are also prompting Renault to rely more heavily on AI. To comply with new carbon dioxide emissions standards, Renault has been working on the design and development of hybrid vehicles. But hybrid engines are far more complex to develop than those found in vehicles with a single energy source, such as a conventional car. That’s because hybrid engines require engineers to perform complex feats like balancing the power required from multiple energy sources, choosing from a multitude of architectures, and examining the impact of transmissions and cooling systems on a vehicle’s energy performance.

“To meet new environmental standards for a hybrid engine, we must completely rethink the architecture of gasoline engines,” says Vincent Talon, head of simulation at Renault. The problem, he adds, is that carefully examining “the dozens of different actuators that can influence the final results of fuel consumption and pollutant emissions” is a lengthy and complex process, made all the more difficult by rigid timelines.

“Today, we clearly don’t have the time to painstakingly evaluate various hybrid powertrain architectures,” says Talon. “Rather, we needed to use an advanced methodology to manage this new complexity.”

For more on AI in industrial applications, visit www.siemens.com/artificialintelligence.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

AI consumes a lot of energy. Hackers could make it consume more.

The attack: With this kind of neural network, changing the input, such as the image it’s fed, changes how much computation the network needs to process it. That opens up a vulnerability that hackers could exploit, as researchers from the Maryland Cybersecurity Center outline in a new paper being presented at the International Conference on Learning Representations this week. By adding small amounts of noise to a network’s inputs, they made it perceive the inputs as more difficult and jack up its computation. 

When they assumed the attacker had full information about the neural network, they were able to max out its energy draw. When they assumed the attacker had limited to no information, they were still able to slow down the network’s processing and increase energy usage by 20% to 80%. The reason, as the researchers found, is that the attacks transfer well across different types of neural networks. Designing an attack for one image classification system is enough to disrupt many, says Yiğitcan Kaya, a PhD student and paper coauthor.
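The idea can be illustrated with a toy early-exit classifier, where each layer refines a prediction and the network stops as soon as it is confident. Everything below (the prototypes, step size, threshold, and perturbation) is invented for illustration and is not the architecture or attack from the paper; the point is only that an ambiguous, noisier input keeps the exit condition from firing, so more layers run:

```python
import numpy as np

# Toy "input-adaptive" classifier with early exits: each layer nudges
# the feature toward the nearest class prototype, and the network stops
# as soon as its prediction is confident enough. All values here are
# hypothetical, chosen to make the effect easy to see.

PROTOS = np.array([[1.0, 0.0], [-1.0, 0.0]])  # two class prototypes
MAX_LAYERS = 20
THRESHOLD = 0.95

def confidence(h):
    # Softmax over negative distances to the two prototypes.
    d = np.linalg.norm(PROTOS - h, axis=1)
    e = np.exp(-4.0 * d)
    return (e / e.sum()).max()

def layers_used(x):
    """Return how many layers run before the early exit fires."""
    h = np.array(x, dtype=float)
    for i in range(1, MAX_LAYERS + 1):
        nearest = PROTOS[np.argmin(np.linalg.norm(PROTOS - h, axis=1))]
        h = h + 0.2 * (nearest - h)       # one "layer" of refinement
        if confidence(h) >= THRESHOLD:    # confident enough: exit early
            return i
    return MAX_LAYERS

clean = np.array([0.9, 0.0])   # unambiguous input: exits almost at once
noisy = np.array([0.0, 1.0])   # perturbed toward the decision boundary
print(layers_used(clean), layers_used(noisy))
```

In this toy setup the perturbed input runs three layers where the clean one runs a single layer, tripling the computation; the real attacks in the paper craft far subtler perturbations against far larger networks, but they exploit the same confidence-based exit logic.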

The caveat: This kind of attack is still somewhat theoretical. Input-adaptive architectures aren’t yet commonly used in real-world applications. But the researchers believe this will quickly change because of pressure within the industry to deploy lighter-weight neural networks, such as for smart home and other IoT devices. Tudor Dumitraş, the professor who advised the research, says more work is needed to understand the extent to which this kind of threat could cause damage. But, he adds, the paper is a first step toward raising awareness: “What’s important to me is to bring to people’s attention the fact that this is a new threat model, and these kinds of attacks can be done.”

Copyright © 2020 Diliput News.