Wednesday, November 30, 2022

NEWS TECHNOLOGIE

(Image: Sony)
Just about every household name in the tech space wants a piece of the metaverse pie, and Sony is no different. On Tuesday the company launched Mocopi, a motion-tracking system that translates one’s physical movements into animation in a virtual space.

Mocopi, named for its motion capture abilities, uses six wireless trackers worn on the wrists, ankles, head, and back to record a user’s movement in real time. The color-coded trackers are each about the size and shape of an Apple AirTag and attach to stretchy bands and a pants clip for easy wearing. As the user moves around, an algorithm translates information from the trackers’ tiny sensors into data received by a dedicated smartphone app.

As of now, it appears Sony’s Mocopi smartphone app serves only as a demo for the system’s motion-tracking capabilities. Its promotional video depicts in-sync 3D avatar dances that can be recorded and played back later. Per Sony’s Japanese press release, however, the company will release a software development kit (SDK) in mid-December. The SDK will allow developers to combine Mocopi’s motion-tracking with metaverse services and 3D development software to create interactive fitness and community experiences.
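Sony hasn't published the SDK's interface yet, so the following is a purely hypothetical sketch of how a companion app might ingest a live pose stream from body-worn trackers. The port, packet layout, and joint names are all invented for illustration and are not Sony's actual API.

```python
# Hypothetical sketch only: Sony has not documented the Mocopi SDK, so the
# port, JSON packet layout, and joint names below are invented placeholders.
import json
import socket

TRACKER_JOINTS = ["head", "back", "l_wrist", "r_wrist", "l_ankle", "r_ankle"]

def pose_stream(port: int = 9000):
    """Yield one dict of joint rotations per received UDP packet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        packet, _addr = sock.recvfrom(4096)
        frame = json.loads(packet)            # e.g. {"head": [x, y, z, w], ...}
        yield {joint: frame.get(joint) for joint in TRACKER_JOINTS}

# A metaverse client would then map each joint's quaternion onto an avatar:
# for pose in pose_stream():
#     avatar.set_joint_rotation("head", pose["head"])
```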

Though it hasn’t yet confirmed any specific plans, Sony says it will eventually partner with other companies to create exclusive Mocopi experiences. Rather than making users hold bulky controllers to move around a virtual reality (VR) space, third parties could pair Mocopi with headset-centered experiences to make them more immersive and enable “new” movements like kicks—something handheld controllers can’t facilitate.

Mocopi is 49,500 yen, or approximately $356. That isn’t prohibitively expensive on its own, but it’s a bit much for something that will likely work with only a handful of non-VR-headset games. Add a headset and you’re looking at quite an expensive setup: Mocopi, a VR headset, and a console or PC can together cost thousands. It’s still cool technology, though, given it’s easier to get into the zone when you aren’t holding a bunch of hardware.

Mocopi’s success will ultimately depend on how many virtual experiences it’s compatible with. The metaverse hasn’t been looking too hot lately, but even if it were to fail entirely, the VR and augmented reality (AR) markets might accept Mocopi with open arms. Animators might be interested in real-time motion-tracking hardware like Mocopi, too, as it (ideally) captures people’s natural movements.

from ExtremeTech https://ift.tt/TQGU4D8

NEWS TECHNOLOGIE

Mars is the only known planet aside from Earth that has polar ice caps, but unlike Earth, the ice on Mars is mostly of the “dry” carbon dioxide variety. Naturally, there’s great interest in better understanding the Martian polar regions. A new analysis using data from NASA’s Mars Reconnaissance Orbiter (MRO) has revealed previously hidden structures under the northern ice cap, pictured above in a Mars Global Surveyor image. The researchers found wavy landscapes, impact craters, and even a large canyon, all buried under the ice.

The Mars Reconnaissance Orbiter has been observing the red planet from above since 2006. Among its suite of instruments is a special type of penetrating radar called Shallow Subsurface Radar (SHARAD). It emits radar waves between 15 and 25 megahertz, which can pass through up to 4 kilometers of material before bouncing back to the orbiter. It has a depth resolution of about 15 meters. This instrument has been returning data on the ice caps and other regions of Mars for years, but the team from the Planetary Science Institute (PSI) did something new with it.
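Those figures are internally consistent: a pulse-compression radar's range resolution is the speed of light divided by twice its bandwidth, and SHARAD's 15-to-25 MHz sweep gives a 10 MHz bandwidth. A quick check (the in-ice figure assumes a typical refractive index for water ice, a value not taken from the article):

```python
# Back-of-the-envelope check on SHARAD's quoted ~15 m depth resolution.
C = 299_792_458          # speed of light in vacuum, m/s
BANDWIDTH = 10e6         # 25 MHz - 15 MHz sweep

free_space = C / (2 * BANDWIDTH)
print(f"free-space resolution: {free_space:.1f} m")    # ~15.0 m, as quoted

# Inside ice the wave slows by the refractive index (~1.8 for water ice,
# an assumed typical value), so the in-material resolution is finer:
print(f"in-ice resolution: {free_space / 1.8:.1f} m")  # ~8.3 m
```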

SHARAD was included on the MRO to complement the MARSIS radar on the Mars Express orbiter. MARSIS can bounce its radar waves deeper into the planet, but SHARAD has a much higher resolution. And with clever data processing, the PSI team boosted the effective resolution. The team processed years of 2D scans from SHARAD using advanced 3D imaging methods to remove noise and interference. The result is a sharper 3D image of the planetary structures below the layers of frozen carbon dioxide. This 3D “radargram” makes it possible to identify things that are difficult or impossible to see otherwise.
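The paper's 3D processing is far more sophisticated than this, but a toy example shows the core idea: when repeated, co-registered 2D scans are stacked into a volume, incoherent noise averages out while real reflectors reinforce.

```python
# Toy illustration (not PSI's actual pipeline): averaging N co-registered
# noisy radargrams improves signal-to-noise by roughly sqrt(N).
import numpy as np

rng = np.random.default_rng(42)
truth = np.zeros((64, 64))
truth[32, :] = 1.0        # one flat subsurface reflector

scans = [truth + rng.normal(0.0, 0.5, truth.shape) for _ in range(40)]
volume = np.mean(np.stack(scans), axis=0)

single_snr = truth[32].mean() / scans[0].std()
stacked_snr = truth[32].mean() / (volume - truth).std()
print(f"SNR single pass: {single_snr:.1f}, after stacking 40: {stacked_snr:.1f}")
```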

The study has been published in the Planetary Science Journal. The image above is a composite, showing a single horizontal slice through the northern polar ice cap (known as Planum Boreum) at the bottom. The other sections are vertical slices. The dark area near the middle is a 300-kilometer region that MRO cannot see from its orbit. The data reveals surface details like the chasm to the right and the impact crater at the bottom of the horizontal slice.

The team believes this same technique could be used to create 3D representations of structures in other regions of the planet. Next, they plan to scour the Planum Boreum data for more buried impact craters and structures.

from ExtremeTech https://ift.tt/i4XMZ7Y

NEWS TECHNOLOGIE

Image: NASA

Just over halfway through its 26-day mission, the Orion capsule has reached its greatest distance from Earth. Previously, the Apollo 13 mission held the record for a spacecraft designed to carry humans: 248,000 miles. But at its farthest, Orion was 270,000 miles from the planet’s surface. And while it was out there, it snapped a selfie with the Earth:

Image: NASA

In the full image, you can see the Orion capsule, with the Earth and Moon in the background. At this moment, the capsule was 268,547 miles from Earth and 43,318 miles from the Moon, traveling at 1,679 miles per hour. Telemetry remains nominal as of early Wednesday morning.

While Artemis 1 has no human crew members, the flight does still have passengers. Orion is carrying special mannequins, or ‘moonikins,’ whose prime directive is to test out next-generation space gear. NASA administrator Bill Nelson explained, during a Monday evening briefing from the agency’s Johnson Space Center in Houston:

“Many of us know: Arturo Campos was a NASA engineer who developed a plan to bring the crippled Apollo 13 crew home safely. For a mission where something terrible went wrong, it’s in the annals at NASA as one of our most successful missions — because they saved the crew. Well, on Orion now is Commander Moonikin Campos, his namesake. […] He’s outfitted with sensors to provide data on what crew members will experience in flight. And that data will continue Campos’ legacy of enabling human exploration in deep space.”

Beside Cmdr. Campos ride two other ‘moonikins,’ Helga and Zohar. All three are positively bristling with sensors that will tell NASA about the radiation environment and kinetic forces that lunar astronauts will experience. Cmdr. Campos is also wearing a radiation protection vest that the agency is testing for later Artemis flights. In addition, Helga and Zohar are both built to test out protective gear in more inclusive sizes.

To Boldly Go

NASA has a gender problem. The agency infamously asked its first female astronaut, Sally Ride, whether a hundred tampons would be enough for her week-long flight. Decades later, Ride was still laughing into her coffee about NASA’s desperately oblivious ideas on what female astronauts might need. Despite being literal, actual rocket scientists, Ride told the agency’s History Office in a 2002 interview, these men thought space makeup was an essential part of a female astronaut’s everyday carry. Never mind a zero-G toilet that can accommodate female anatomy. Gimme that space blush.

Surprising few, the kit was never used. Meanwhile, NASA finally fielded an anatomically inclusive toilet on the International Space Station — in 2020.

Space blush, oy. The powdery particulate alone should have killed this idea before it ever made it off a napkin. Besides, someone’s double standards are showing — I don’t see space guyliner.

But the joke falls flat when spaceship safety harnesses and space suits don’t fit astronauts right, because they’re sized for just one type of male body. NASA had to shuffle the roster for a 2019 spacewalk outside the ISS because it simply didn’t have enough mix-and-match space suit parts to garb both Anne McClain and Christina Koch, the two women who would have made the excursion, at the same time. Instead, a male astronaut took McClain’s place on the spacewalk. Female bodies are statistically shorter and slenderer than male bodies; as a result, women sustain a disproportionate number of injuries in accidents and collisions. But the female-bodied ‘moonikins’ aboard Orion are the vanguard of a change.

What Happens Next

Orion just passed the halfway mark in its mission to the moon and back. It will remain in a distant retrograde orbit (DRO) around the moon until a few days into December, when it makes a course-correction burn to head back Earthside. In this mission itinerary, we’re between steps eleven and twelve:

Image: NASA

During a briefing on flight day 13 (Monday), Artemis 1 mission manager Mike Sarafin said that two-thirds of Orion’s docket is complete or in progress. Many of the spacecraft’s remaining “real-time objectives” take place during the descent phase. “We’re continuing to proceed along the nominal mission,” said Sarafin, “and we’ve passed the halfway point in terms of distance from earth, time in the mission plan, and in terms of mission objectives.”

Overall, the mission is going well. Artemis 1 lead flight director Rick Labrode said that the team opted out of the most recent of Orion’s nineteen scheduled burns. Sarafin also reported that the mission has actually “close[d] one of our anomaly resolution teams associated with the star trackers and the random access memory built-in test hardware that we’re seeing a number of funnies on.”

“The next greatest test for Orion (after the launch),” said Nelson, “is the landing.” Orion will hit our atmosphere at around 25,000 mph. For reference, that’s about Mach 32. The capsule will dip into the atmosphere to slow itself to a mere 17,000 mph, or Mach 22, added Nelson. Artemis 1 will end when Orion splashes down in the Pacific on December 11.
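Those Mach figures check out against the sea-level speed of sound, the usual reference for such comparisons (Mach number technically depends on local atmospheric conditions):

```python
# Quick arithmetic behind the quoted Mach numbers.
SEA_LEVEL_SOUND_MPH = 767    # approximate speed of sound at sea level

for mph in (25_000, 17_000):
    print(f"{mph:,} mph ≈ Mach {mph / SEA_LEVEL_SOUND_MPH:.1f}")
# 25,000 mph ≈ Mach 32.6 and 17,000 mph ≈ Mach 22.2, matching Nelson's figures
```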

The inaugural Artemis flight had only its ‘moonikins’ aboard. However, future missions will carry human crew members. Artemis 2 will fly four human crew members to lunar orbit. “They are going to the Moon, to lunar orbit, in preparation for Artemis 3,” Nelson said. Rather than confining itself to lunar orbit, Artemis 3 will actually land humans on the lunar surface. For Artemis 3, Nelson said, “We will have four [astronauts] go into a lunar polar elliptical orbit, and we’ll then have two astronauts in the Lander go down to the surface. That will be the first woman, and the next man.”

from ExtremeTech https://ift.tt/wyQai3k

NEWS TECHNOLOGIE

Samsung's new memory features Fan-Out Wafer-Level Packaging. (Image: Samsung)

In the past, chip companies such as AMD have dabbled in High-Bandwidth Memory (HBM) instead of GDDR to increase memory bandwidth for GPUs. This vertically stacked memory boasts incredible bandwidth, but it’s a costly endeavor. AMD abandoned it in favor of GDDR memory after its ill-fated R9 Fury and Vega GPUs. Now Samsung has created a new type of GDDR6 memory it says is almost as fast as HBM without needing an interposer. Samsung calls GDDR6W the industry’s first “next-generation” DRAM technology and says it will empower more realistic metaverse experiences.

Samsung took its existing GDDR6 platform and built it with Fan-Out Wafer-Level Packaging (FOWLP). With this technology, the memory die is mounted to a silicon wafer instead of a printed circuit board (PCB). Redistribution layers are fanned out around the chip, allowing for more contacts and better heat dissipation. Memory chips are also double-stacked. Samsung says this has allowed it to increase bandwidth and capacity in the exact same footprint as before. Since there’s no increase in die size, its partners can drop GDDR6W into existing and future designs without any modifications. This will theoretically reduce manufacturing time and costs.

Samsung’s Fan-Out Wafer-Level Packaging allows for a smaller package thanks to the absence of a PCB. (Credit: Samsung)

The new memory offers double the I/O and bandwidth of GDDR6. Using its existing 24Gb/s GDDR6 as an example, Samsung says the GDDR6W version has twice the I/O, as there are more contact points. It also doubles capacity from 16Gb to 32Gb per chip. As shown above, the height of the FOWLP design is just 0.7mm, which is 36 percent lower than Samsung’s standard GDDR6 package. Even though I/O and bandwidth have been doubled, Samsung says the memory has the same thermal properties as existing GDDR6 designs.

Samsung says these advancements have allowed its GDDR6W design to compete with HBM2. It notes that second-generation HBM2 offers 1.6TB/s of bandwidth, with GDDR6W coming close with 1.4TB/s. However, that number from Samsung is using a 512-bit wide memory bus with 32GB of memory, which isn’t something found in current GPUs. Both the Nvidia RTX 4090 and the Radeon RX 7900 XTX have a 384-bit wide memory bus and offer just 24GB of GDDR6 memory. AMD uses GDDR6 while Nvidia has opted for the G6X variant made by Micron. Both cards have around 1TB/s of memory bandwidth, though, so Samsung’s offering is superior.
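The bandwidth figures in this comparison follow directly from bus width and per-pin data rate, which is worth spelling out since that trade-off is the whole GDDR-versus-HBM story:

```python
# Aggregate bandwidth = per-pin rate (Gb/s) x bus width (bits) / 8 bits-per-byte.
def bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    return pin_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(22, 512))  # 1408 GB/s: Samsung's ~1.4 TB/s GDDR6W config
print(bandwidth_gb_s(21, 384))  # 1008 GB/s: the RTX 4090's ~1 TB/s with 21 Gb/s G6X
```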

The big news here is that thanks to Samsung’s chip-stacking, half the memory chips are required to achieve the same amount of memory as current packaging. This could result in reduced manufacturing costs. Overall, its maximum transmission rate per pin of 22Gb/s is very close to GDDR6X’s 21Gb/s. So the gains in the future probably won’t be for maximum performance, but rather memory capacity. You could argue nobody needs a GPU with 48GB of memory, but perhaps when we’re gaming at 16K that’ll change.

As far as products go, Samsung says it’ll be introducing GDDR6W soon in small form factor packages such as notebooks. It’s also working with partners to include it in AI accelerators and such. It’s unclear whether AMD or Nvidia will adopt it, but if they do it’ll likely be far in the future. That’s just because both companies are already manufacturing their current boards with GDDR6/X designs, so we doubt they’d swap until a new architecture arrives.

from ExtremeTech https://ift.tt/WcoLX1p

Tuesday, November 29, 2022

NEWS TECHNOLOGIE

(Photo by Drew Angerer/Getty Images)

Microsoft’s bid to acquire Activision Blizzard is on thin ice. Antitrust regulators from several regions have been reviewing the deal all year, and several have recently expressed concern that the acquisition could drastically reduce competition within the video game industry. With European Union watchdogs especially wary, Microsoft appears prepared to do whatever it takes to push the deal through—even if it means making concessions to Sony.

The EU opened a deeper probe into the bid in early November following a marked spike in concerns over Activision’s most successful franchises, particularly Call of Duty. Sony, Microsoft’s biggest gaming competitor, has repeatedly noted in no uncertain terms that Microsoft’s acquisition of Activision could mean a rapid loss of vital content for PlayStation players. Not only would Microsoft’s acquisition of Activision make Microsoft the third largest gaming company in the world, but from Sony’s perspective, sudden Call of Duty exclusivity could push former PlayStation devotees to PC or Xbox.

Microsoft has previously attempted to assuage these worries in two starkly different ways. At first, its tactic was to assure Sony (and the rest of the world) that its console compatibility agreements involving Call of Duty and other major Activision titles would remain in effect past their contracted timelines. Then it changed tack, telling Sony and antitrust regulators that Activision didn’t have any “must-have” titles. (Read: “So just stop stressing about it, okay?”)

(Image: Activision Blizzard)

Those strategies don’t seem to have had the effect Microsoft wanted. According to a new Reuters report, the EU is set to publish a formal list of competition concerns (called a “statement of objections”) regarding the deal in January. Microsoft, clearly eager to get ahead of whatever the EU has in store, is reportedly preparing to offer Sony a 10-year Call of Duty license to sweeten the deal.

The possible 10-year agreement is a touch ironic given Microsoft’s previous insistence that it would keep major Activision titles available on PlayStation regardless, but of course it’s always best to get those types of promises on paper. Still, even if Microsoft does formally submit such an offer, there’s no guarantee it’ll be accepted by both Sony and the necessary authorities. If it is, legal experts believe the license could accelerate the review process and address any concerns raised in January.

This doesn’t mean Microsoft would be cleared for takeoff, however. Three sources told Politico last week that the US Federal Trade Commission (FTC) is likely to challenge the $69 billion deal via lawsuit. Though nothing has been filed yet, an FTC lawsuit could mean the end of Microsoft’s bid; the company aims to wrap up its acquisition of Activision by July 2023.

from ExtremeTech https://ift.tt/e4vudFK

NEWS TECHNOLOGIE

Trails in the night sky left by BlueWalker 3 are juxtaposed against the Nicholas U. Mayall 4-meter Telescope at Kitt Peak National Observatory in Arizona, a Program of NSF's NOIRLab. The gaps in the trail correspond to the pauses between the four twenty-second exposures that were stacked to create this image.

Cell phone towers in space may be the next frontier of mobile communication, but astronomers are starting to get worried. AST SpaceMobile successfully deployed its new BlueWalker 3 communication satellite, and the International Astronomical Union (IAU) says this enormous satellite is now one of the brightest objects in the sky. The IAU warns that the proliferation of objects like BlueWalker 3 could have disastrous effects on astronomy.

AST SpaceMobile launched BlueWalker 3 in September aboard a SpaceX Falcon 9 rocket, deploying its expansive communication array earlier this month. The satellite has a total surface area of 693 square feet (64 square meters), and it’s in a low-Earth orbit. Even before launch, many in the astronomical community feared this object would outshine nearly all the stars in the sky, and that’s exactly what happened.

BlueWalker 3 needs that gigantic antenna because of the way it intends to deliver connectivity directly to existing cell phones. The antenna in your phone is designed to talk to nearby towers on the ground — getting connected to a satellite is much harder. Satellite phones usually have bulky adjustable antennas, but no one wants to carry one of those around. Thus, BlueWalker 3 has its giant antenna array to deliver 4G and 5G service. The company plans to use BW3 to test services that could eventually come to partners like AT&T and Vodafone.

Officially, the IAU is “troubled” by the “unprecedented brightness” of BlueWalker 3, but it does not necessarily oppose the launch of such satellites. Increasing connectivity in underserved areas is a noble goal, but the group is asking companies to adopt technologies and designs that minimize the impact these satellites have on astronomy. To make the point, the IAU has provided some sample images of BlueWalker 3 photobombing telescopes. There is also concern that blasting cellular signals from space will increase interference at radio observatories, which are often built as far away from cell phone towers as possible.

Observation using the 0.6-meter Chakana telescope at the Ckoirama Observatory in Chile. This 10-second image shows BlueWalker 3 with a measured apparent magnitude of V=6.6 at a range of 865 km. Observers: Eduardo Unda-Sanzana, Christian Adam, and Juan Pablo Colque.
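Apparent brightness scales with the inverse square of distance, which in astronomy's magnitude system is a shift of 5·log10(d1/d2). Scaling the V=6.6 measurement above to a closer pass shows why observers are alarmed (the 500 km range here is illustrative, not a figure from the IAU report):

```python
# Scale the measured magnitude (V=6.6 at 865 km) to other ranges using
# the inverse-square law: delta_m = 5 * log10(d_ref / d).
from math import log10

def magnitude_at(range_km: float, ref_mag: float = 6.6, ref_km: float = 865.0) -> float:
    return ref_mag - 5 * log10(ref_km / range_km)

print(f"{magnitude_at(500):.1f}")  # ~5.4: lower magnitude means brighter,
                                   # comfortably within naked-eye visibility
```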

AST SpaceMobile is not alone in this quest to bring cellular service to space. Apple recently enabled Emergency SOS satellite communication via Globalstar, but it only supports text messaging with significant delays. Meanwhile, SpaceX and T-Mobile want to provide text and voice calls with next-gen Starlink v2 satellites. Astronomers are already up in arms about the existing Starlink constellation ruining images, and the larger Starlink v2 could be almost as bad for astronomy as BlueWalker 3. The skies are getting a lot more crowded, which makes space-based instruments like the James Webb Space Telescope all the more vital.

from ExtremeTech https://ift.tt/j3pqQP4

NEWS TECHNOLOGIE

(Photo: Raysonho @ Open Grid Scheduler/Wikimedia Commons)
Epson, the Japanese hardware corporation best known for its printers, is sunsetting its laser printer division due to sustainability concerns. The company has quietly chosen to stop selling laser printer hardware by 2026. The company will instead focus on its more environmentally-friendly inkjet printers, according to a statement obtained by The Register. Although the company stopped selling laser printers in the United States a while back, it had maintained the line in other markets, including Europe and Asia. Consumers will no longer be able to purchase new Epson laser printers as of 2026, but Epson has promised to continue supporting existing customers via supplies and spare parts.

Epson itself claims its inkjets are up to 85 percent more energy efficient than its laser units and produce 85 percent less carbon dioxide. These statistics might not matter to individuals who occasionally print at home, but they provide businesses and nonprofit organizations with a way to cut down on their energy bills and carbon footprint.

Inkjets typically require fewer single-use resources, too. While laser printers rely on toner, fusers, developer, and other disposable parts, inkjets use only ink and a waste ink box. Not only do inkjet printers produce nearly 60 percent less e-waste than their laser counterparts, but their production is a bit kinder to the environment as well: creating one toner cartridge requires burning anywhere from half a gallon to a full gallon of oil.

(Photo: DragonLord/Wikimedia Commons)

The decision to end all laser printer sales is likely a part of Epson’s “Environmental Vision 2050,” a circular economic model the company first committed to in 2018 and revised last year. Its biggest focus is Epson’s promise to become carbon-negative and “underground resource free” by 2050.

That said, inkjet printers aren’t the definitive solution to sustainable printing that Epson would like consumers to believe them to be. Inkjet cartridges dry out relatively quickly, leading some printer users to buy more ink than they actually use. Inkjet printing also costs more per page, which means the energy savings gleaned from ditching a laser printer might be offset during use. Epson has also been in hot water recently for forcing some printer users to visit an authorized repair person to fix suddenly bricked machines. Some Epson L360, L130, L220, L310, and L365 users even have to replace their machines altogether, which only puts more money in Epson’s pocket while producing seemingly unnecessary e-waste.

from ExtremeTech https://ift.tt/Y1EFuHT

NEWS TECHNOLOGIE

Tesla has been promising its Full Self-Driving feature would be available “soon” for the last several years, and today might finally be the day. Tesla CEO Elon Musk has tweeted that the Full Self-Driving Beta is now live for anyone in North America who wants it — minus the most important feature. Of course, you need to have paid for Full Self-Driving in order to access it. Otherwise, you’ll be stuck with the lesser Autopilot features.

Full Self-Driving (FSD) mode has been in testing for years — most of those who bought the package with their cars have never even been able to use it. During the limited test, drivers had to log over 100 hours with Autopilot and hit a minimum driver safety score with Tesla’s integrated behavior-tracking features; only then would Tesla unlock the Full Self-Driving beta for cars in North America. Why so much red tape? Because Full Self-Driving isn’t really what it sounds like. It is more capable than regular Autopilot, but you won’t be napping in the backseat.

Autopilot is one of the features that helped Tesla make its name among the more established automakers. All the company’s vehicles have basic Autopilot features like adaptive cruise control and lane-keeping — today, that’s not much of a differentiator, but a $6,000 upgrade adds Enhanced Autopilot to Tesla’s cars. This package includes Autopilot navigation on the highway, automatic lane changes, smart summon, and more.

At the top of Tesla’s self-driving pyramid is Full Self-Driving, which costs a whopping $15,000 upfront or $199 per month. This feature allows the car to see and react to traffic signals, and theoretically allows it to navigate autonomously on surface streets in addition to highways. However, that feature is still not available even in the beta. Still, Musk says anyone with the Full Self-Driving package in North America can opt into the beta now.

Tesla says that FSD is reliable, but safety advocates question that. Full Self-Driving is still just a “level 2” system, which means drivers are supposed to remain attentive at all times, but research has shown people using Autopilot spend less time looking at the road. It may be just good enough to make people think the car is driving itself. Some demonstrations also suggest pedestrian detection may be particularly bad at identifying children (and therefore stopping before hitting them).

Tesla is also facing increased scrutiny from regulators over the way it designs and markets its autonomous driving features. The National Highway Traffic Safety Administration is investigating a series of accidents in which Tesla vehicles in Autopilot mode collided with stationary emergency vehicles, and the Department of Justice has launched a parallel criminal investigation. Meanwhile, California is suing Tesla for misleading marketing around Full Self-Driving. It’s possible these cases could force changes to Full Self-Driving before the city street navigation feature debuts. Tesla has not given any indication of when that will be.

from ExtremeTech https://ift.tt/Oh24fgz

NEWS TECHNOLOGIE

(Credit: PCMag)
Now that Nvidia has launched the RTX 4090 and 4080, it is desperately trying to clear the channel of older GPUs. The end of crypto mining and broader economic unease have resulted in a deluge of GPUs, often at bargain prices. At least, that’s been true for AMD; Nvidia GPUs are still priced higher than expected. Still, Nvidia wants to give shoppers fewer options when it comes to buying GPUs, and one way it’s reportedly doing that is by ending production of two of its most popular series: the RTX 2060 and the GTX 1660.

Word of the impending shutdown of production on these mainstream GPUs comes from Chinese media (via TechSpot). Nvidia reportedly ended production of the RTX 2060 cards in early November. Now it’s adding the wallet-friendly GTX 1660 to the list as well. Both cards resonated with gamers seeking 1080p gaming on a budget: the RTX 2060 is currently the second most popular GPU in the Steam Hardware Survey, and the GTX 1660 is the eighth. The RTX 2060 lineup includes several models: the OG RTX 2060 from 2019 with 6GB of VRAM, the 12GB version from late 2021, and the RTX 2060 Super with 8GB of VRAM. These GPUs range in price from around $170 to $400.

The Asus TUF GTX 1660, from an era where GPUs were tiny. (Credit: Asus)

The GTX 1660 was always a curiosity, seemingly released to counter the bad press Nvidia’s RTX cards were getting. If you recall, the Turing line was the first to support ray tracing. However, very few titles supported it, and enabling it had a profound impact on performance. This seemingly prompted Nvidia to release a Turing GPU without ray tracing: the GTX 1660. The line comprises three GPUs, the original 1660, the 1660 Ti, and the 1660 Super, all with 6GB of VRAM. It’s a true bang-for-the-buck lineup, with some models going for a smidge over $100 on eBay.

This is seemingly the latest attempt by Nvidia to clear the field for its upcoming RTX 4060. It’s also trying to sell through its existing RTX 3060 stock, so giving buyers fewer options could push people up the GPU food chain. It’s unclear how the RTX 4060 will be priced, but if the past is any precedent, it’ll be expensive. Nvidia has increased pricing significantly for its 40-series GPUs. Although that’s worked out fine for the flagship 4090, it’s not the case with the $1,200 RTX 4080. Buyers are seemingly fed up with what they see as price gouging on these high-end models.

Despite Nvidia’s efforts, these GPUs will still be available for some time, at least on eBay. Once they disappear, the market will see a dearth of affordable GPUs, with Ampere the only non-40-series option left. The RTX 3050 is the most affordable card at almost $300, and the RTX 3060 only goes up from there. It’s not a fantasy to envision the RTX 4060 costing $499 or something similar, either. It’s a bad situation that is seemingly only going to get worse—unless AMD can undercut Nvidia with its midrange cards the way it’s doing with its high-end RDNA3 GPUs.

from ExtremeTech https://ift.tt/GbA1m4R

Monday, November 28, 2022

NEWS TECHNOLOGIE

Google’s Project Zero team is on the front lines of digital security, analyzing code, reporting bugs, and generally making the internet safer. However, not every vulnerability gets fixed in a timely manner. A recent batch of serious flaws in Arm’s Mali GPU drivers was reported by Project Zero and fixed by the manufacturer. However, smartphone vendors never implemented the patches, among them Google itself. So, that’s a little embarrassing.

The story starts in June 2022, when Project Zero researcher Maddie Stone gave a presentation on zero-day exploits — attacks on vulnerabilities for which no patch is yet available. The talk used a vulnerability identified as CVE-2021-39793 and the Pixel 6 as an example. This flaw allowed apps to access read-only memory pages, which can leak personal data. Following this, researcher Jann Horn started looking more closely at Arm Mali GPU code, finding five more vulnerabilities that could allow an attacker to bypass Android’s permission model and take control of the system.

Some of these issues were allegedly available for sale on hacking forums, making them especially important to patch. Project Zero reported the issues to Arm, which followed up with source code patches for vendors to implement. Project Zero waited another 30 days to disclose the flaws, which it did in August and mid-September 2022. Usually, this would be the end of the story, but Project Zero occasionally circles back to assess whether fixes actually shipped. In this case, the team found a “patch gap.”

Google believes the Mali issues it uncovered were already available in the zero-day market.

Although Arm released the patches over the summer, vendors hadn’t integrated them into their regular Android updates. The issues affect numerous devices that run a system-on-a-chip featuring a Mali GPU, including Android phones from Samsung, Xiaomi, Oppo, and Google. Snapdragon chips are spared, as they use Qualcomm’s own Adreno GPU. So, Samsung phones in North America are safe, but those sold internationally with Exynos chips are at risk.

In past years, this might not have affected Google, but the company switched from Qualcomm to the custom Tensor chips for Pixel phones in 2021. Tensor uses a Mali GPU, so Google’s security team found flaws that the Pixel team failed to add to the regular software updates. Google is not alone in making this mistake, but it’s still not a great look. Google now says that the Mali patches will be added to Pixel phones “in the coming weeks.” Other vendors haven’t offered a timetable yet.

from ExtremeTech https://ift.tt/imqfFJt

NEWS TECHNOLOGIE

(Photo: USGS)
For the first time in nearly four decades, the world’s largest active volcano—Hawaii’s Mauna Loa, whose summit caldera is known as Mokuʻāweoweo—has begun to erupt.

The US Geological Survey (USGS) issued a red alert late Sunday night at the first sign of activity. Mauna Loa’s impact was confined to its summit at the time, precluding the need for any immediate evacuations. On Monday morning, lava was still overflowing from the volcano’s caldera. Though authorities weren’t immediately concerned for any downhill communities, they did warn that ash from the eruption could drift to and accumulate in nearby areas, presenting possible health and infrastructure concerns.

The USGS and the Hawaii County Civil Defense Agency first became wary of Mauna Loa’s impending eruption in September, when seismic activity near the volcano began to spike. The USGS shared that Mauna Loa was “in a state of heightened unrest” but assured the public that there were “no signs of an imminent eruption” at the time. Authorities prohibited backcountry hiking at Mauna Loa nonetheless. Just a month later, 36 small earthquakes occurred near the volcano’s base in just two days, extending Mauna Loa’s “heightened unrest” status.

Lava flow from Mauna Loa’s 1984 eruption. (Photo: National Park Service)

Mauna Loa—which is located just southwest of the Big Island’s center—last erupted in 1984. That eruption lasted 22 days, with lava flow stopping just four miles from the nearby city of Hilo. For the first time, scientists were able to thoroughly monitor Mauna Loa’s lava flow, generating what are now undoubtedly priceless insights regarding this year’s eruption and its possible effects. Before 1984, Mauna Loa was estimated to have erupted approximately every six years; the 38-year gap between then and this year’s eruption marks the volcano’s longest known period of quiescence.

This time around, the public has multiple tools at its disposal through which to monitor Mauna Loa’s activity. Not only is the USGS delivering real-time updates through its Twitter page and its Mauna Loa web page, but people from around the world can view the eruption through the USGS live webcam, which sits on the volcano’s northwest rim.

from ExtremeTech https://ift.tt/YUJOVsZ

NEWS TECHNOLOGIE

(Photo: Jian-Cheng Lai, Bao Research Group/Stanford University)
Chronic wounds are an under-acknowledged medical concern. At any given time, more than 600,000 Americans are thought to experience physiologically-stunted wounds that won’t heal. Chronic wounds aren’t just inconvenient and painful; they also rack up individual healthcare costs and prevent people from engaging in certain activities, resulting in a decreased quality of life.

Thanks to new research, this might not always be the case. A team of scientists at Stanford University has developed a wireless “smart bandage” that simultaneously monitors wound repair and helps to speed up healing. The bandage could shorten the time people suffer from chronic wounds while mitigating the physical damage and discomfort caused by conventional healing methods.

In a study published last week in Nature Biotechnology, the scientists describe a flexible, closed-loop device that seals wounds while transmitting valuable biodata to an individual’s smartphone. Hydrogel makes up the bandage’s base: While conventional bandages tug and tear at the skin when they’re pulled away, hydrogel allows the smart bandage to attach securely without causing secondary damage during removal. On top of the hydrogel sits the electronic layer responsible for wound observation and healing. At just 100 microns thick, this layer contains a microcontroller unit (MCU), electrical stimulator, radio antenna, memory, and a series of biosensors.

(Image: Jian-Cheng Lai, Bao Research Group/Stanford University)

The biosensors look for two types of information: changes in electrical impedance and temperature fluctuations. Impedance is known to increase as wounds heal, while temperatures drop during wound resolution. Real-time insights regarding both of these indicators can inform the smart bandage’s repair-accelerating function, which utilizes electrical stimulation to encourage the activation of pro-regenerative genes. One of these genes, Apolipoprotein E, boosts muscle and soft tissue growth; another, Selenoprotein P, reduces inflammation and helps clear out pathogens.
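In control terms, the device closes a simple loop: sense the two healing indicators, stimulate when neither is trending the right way. A minimal sketch of that logic follows; the rule and the numbers are invented placeholders, not values from the study.

```python
# Minimal sketch of the closed-loop idea (thresholding rule invented, not
# from the Nature Biotechnology paper): impedance should rise and temperature
# should fall as a wound heals; stimulate when neither indicator is improving.
from dataclasses import dataclass

@dataclass
class WoundReading:
    impedance_ohms: float    # rises as the wound closes
    temperature_c: float     # falls as inflammation resolves

def should_stimulate(prev: WoundReading, curr: WoundReading) -> bool:
    impedance_rising = curr.impedance_ohms > prev.impedance_ohms
    temperature_falling = curr.temperature_c < prev.temperature_c
    return not (impedance_rising or temperature_falling)

# Healing stalled on both indicators -> stimulate:
print(should_stimulate(WoundReading(1200, 34.9), WoundReading(1190, 35.0)))  # True
```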

When tested on mice, the smart bandage’s stimulation indeed promoted the activation of both genes while increasing the number of white blood cells in each test subject. Mice that received treatment via smart bandage healed 25 percent faster than control mice. Treated mice also experienced a 50 percent enhancement in dermal remodeling, suggesting an improved quality of treatment and physical resolution.

As of now, the scientists’ smart bandage is just a prototype. The team hopes to scale the device’s size to fit humans while finding ways to reduce cost. There also might be a case for adding additional biosensors that track pH, metabolites, and other data. Still, the bandage presents a bit of hope for those who struggle to heal from persistent, life-disrupting wounds.

from ExtremeTech https://ift.tt/qPOyfIT

NEWS TECHNOLOGIE

(Photo: Nekton Mission)
We’ve long thought Antarctica to be relatively free from human influence, thanks to its extreme climate, general lack of human presence, and distance from inhabited land. Unfortunately, what was once considered the last “pristine” wilderness might no longer be. An Antarctic research expedition has found microplastics in the continent’s water, air, and sediment, suggesting a level of pollution far higher than expected.

Nekton, a nonprofit ocean research initiative, partnered with forensic scientists at the University of Oxford to study microplastic pollution in the Weddell Sea—one of the Antarctic’s most remote regions. During an expedition in 2019, scientists gathered samples of the Weddell Sea’s air, subsurface seawater, sea ice, and benthic (seafloor) sediment. The team used a polarized light microscope to examine each of the 82 samples for microplastics.

Every single sample contained some form of microplastic pollution. Polyester fibers, most often used in synthetic textiles, were by far the most common, appearing in 60 percent of samples. Most other pollutants were determined to be nylon, polypropylene, and acrylic fragments of varying shapes and colors. Although the team believes some of these originate from nearby research vessels or from fishing gear used by fleets in the neighboring Scotia Sea, most microplastics appear to arrive by unexpected means: the wind.

Polarized light microscopy image of a polyester textile fiber from one of the team’s samples. (Photo: Nekton Mission)

One might expect seawater to contain the highest microplastic diversity, but a wider spread of microplastic categories was found in the team’s Weddell Sea air samples than anywhere else. Most microplastics appear to float in from South America. Once they arrive in the Antarctic, they’re typically there for good; as a result, the Weddell Sea acts as a sink for plastic particles from a whole other continent.

The expedition’s forensic results challenge the longstanding assumption that the Antarctic Circumpolar Current (ACC), a deep, eastward-flowing current with an associated polar front, isolates most of Antarctica from pollution affecting the rest of the ocean. Some scientists have theorized that the ACC would protect the Weddell Sea and other regions from the increasingly dire issue of microplastic pollution, but it’s now clear this isn’t the case. Worse, the expedition’s findings appear to suggest that microplastics that cross the ACC can get “stuck” there, creating the potential for buildup over time.

The news of Antarctica’s surprising plastic pollution levels couldn’t have come at a more impactful time. This week, more than 150 nations’ environmental representatives are meeting in Uruguay to form an international, legally binding text on plastic pollution. The text will help coordinate a global response to microplastics’ rapidly-increasing presence in food, water, and (as this study has shown) even the air we breathe.

“The results shown here from a remote, arguably near-pristine system, further highlight the need for a global response to the plastic pollution crisis” to conserve marine systems, the researchers wrote in Frontiers in Marine Science.

from ExtremeTech https://ift.tt/Nj15zry

NEWS TECHNOLOGIE

Today, humans build robots, but in the future, robots could be programmed to build more of themselves. Researchers at MIT’s Center for Bits and Atoms (CBA) have created robotic subunits called “voxels” that can self-assemble into a rudimentary robot, and then collect more voxels to assemble larger structures or even more robots.

The researchers, led by CBA Director Neil Gershenfeld, concede that we’re still years away from a true self-replicating robot, but the work with voxels answers some vital questions that will help us get there. For one, the team has shown that the assembler bot and the structural components of whatever it’s building can be made of the same subunits — in this case, voxels.

Each robot consists of several voxels connected end-to-end. They use small but powerful magnets to latch onto additional subunits, which they can use to assemble new objects or make themselves larger. Eventually, a human operator might simply be able to tell these self-assembling robots what they want to be built, allowing the machines to figure out the specifics.

For example, if one robot isn’t enough to build the required structure, it can make a copy of itself from the same voxel components to split the work. When building something large, the robots could also decide to make themselves bigger and thus more efficient for the task. It could also be necessary for large robots to split into smaller ones for more detailed work.
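That grow-or-replicate decision is essentially an optimization problem. As a toy model (the cost function below is invented for illustration and is nothing like CBA's actual planner), assume assembling the workforce costs time proportional to the voxels it contains, and the remaining job is then split evenly across robots:

```python
# Toy grow-vs-replicate planner (invented cost model, not CBA's algorithm):
# time = voxels spent assembling robots + remaining work split across them.
def plan(job_voxels: int, robots: int = 1, length: int = 4) -> tuple[int, int]:
    def est_time(r: int, l: int) -> float:
        return r * l + job_voxels / (r * l)
    while True:
        now = est_time(robots, length)
        if est_time(robots + 1, length) < now:
            robots += 1        # replicate: build another copy of itself
        elif est_time(robots, length + 1) < now:
            length += 1        # grow: extend the robot with more voxels
        else:
            return robots, length

print(plan(10_000))  # -> (25, 4): this toy model favors many small robots
```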

The voxels (a term borrowed from 3D modeling) are based on components developed for previous MIT experiments. However, those voxels were simply structural pieces. The voxels used in the new research have been enhanced with the ability to share power and data when connected. The add-on voxel components don’t have any moving parts, though. All the movement and smarts come from the base units, which are like feet that allow the robot to inch along the magnet-studded substrate.

A large part of this research is simply refining the algorithms that govern how the robots grow and replicate, ensuring they can work together without crashing into each other. Although computer simulations show the system could build larger objects (and more robots), the current hardware is limited. The magnetic connections are only strong enough to support a few voxels, but the team is developing stronger connectors that will allow one robot to build another. By tweaking actuator force and joint strength, the researchers believe they can build on this success.

from ExtremeTech https://ift.tt/p8EBZdu

Friday, November 25, 2022

NEWS TECHNOLOGIE

(Photo: Astro Alex/Wikimedia Commons)
We’re already aware of the consequences of unmitigated carbon emissions, particularly as they relate to climate change here on Earth. But according to two new studies, greenhouse gas buildup affects more than just our immediate surroundings. Researchers have found that rising carbon dioxide levels are causing Earth’s upper atmosphere to contract, which could have serious implications for future space operations.

A team of environmental scientists, atmospheric chemists, and space physicists analyzed data from NASA’s Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) mission, which launched in 2001. The TIMED satellite’s Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument has been gathering insights on atmospheric infrared radiation, or heat, since just a year after launch. These nearly two decades of insights are what the researchers used to assess how building CO2 levels have impacted the atmosphere over time.

(Image: Creative Commons Attribution-Share Alike 4.0)

According to the dual studies they published in the Journal of Geophysical Research: Atmospheres, the researchers found that Earth’s upper atmosphere is indeed contracting—something scientists have suspected for a while, but that’s been difficult to confirm. In the lower atmosphere, CO2 absorbs and re-emits heat in all directions, creating a warming effect. But in the upper atmosphere, the heat CO2 re-emits escapes into space, resulting in gradual cooling instead. This cooling effect causes not only the stratosphere to contract, but also the mesosphere and lower thermosphere (together referred to as the MLT).

The MLT contracted by 1,333 meters in just under 20 years, and the researchers estimate that approximately 342 meters of that were a direct result of CO2 cooling. MLT cooling goes hand in hand with reduced atmospheric drag: as the MLT grows colder and contracts, density at orbital altitudes drops, and with it drag. Given that atmospheric drag is essential to ships’ and satellites’ ability to deorbit, unabated carbon buildup could end up impacting future (or even current long-term) missions. This includes the increasingly necessary task of removing space debris.
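For a sense of scale, the paper's own numbers work out as follows:

```python
# Quick context for the contraction figures quoted above.
total_m, co2_m, years = 1333, 342, 20   # ~20 years of SABER data

print(f"CO2-attributed share: {co2_m / total_m:.0%}")             # ~26%
print(f"Average contraction:  {total_m / years:.0f} m per year")  # ~67 m/yr
```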

The team believes CO2-related MLT cooling could affect the larger aerospace industry sooner than we think. “As long as carbon dioxide increases at about the same rate, we can expect these rates of temperature change to stay about constant too,” they write. “It seems likely that ongoing changes in space climate will become important issues in space law, space policy, and in the business of underwriting insurance for endeavors in space.”

from ExtremeTech https://ift.tt/v9ZIHdP

Thursday, November 24, 2022

NEWS TECHNOLOGIE

(Photo: Olga DeLawrence/Unsplash)
Filing your taxes is already stressful enough without the worry that your data will end up in the wrong hands. Thanks to several tax websites’ newly-discovered data sharing practices, this concern is likely to be prevalent during the 2023 tax filing season. H&R Block, TaxAct, and TaxSlayer have been found to send users’ financial data to none other than Facebook.

The three websites—which together help more than 25 million Americans file their taxes annually—use Meta’s JavaScript code (called “Meta Pixel”) to capture user data and send it Facebook’s way, according to the nonprofit tech investigations newsletter The Markup. H&R Block, one of the country’s most recognizable tax filing firms, was found using Meta Pixel to obtain users’ health savings account usage data as well as dependents’ college expense information. TaxAct was caught using the code to track users’ filing status, dependents, adjusted gross income, and refund totals. TaxAct appears to have lazily attempted to anonymize this data by scrambling dependent names and rounding income and refunds to the nearest thousand and hundred respectively; however, The Markup found the former obfuscation to be easily reversible.
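The Markup's point about reversibility is easy to demonstrate. The generic sketch below (this is not Meta's actual Pixel code; the field handling is merely modeled on the rounding and name obfuscation described above) shows why hashing a value drawn from a small, guessable set is not anonymization:

```python
# Generic illustration, not Meta's code: "anonymizing" form data by hashing
# names and rounding dollar amounts, then reversing the hash by brute force.
import hashlib

def obfuscate(dependent_name: str, refund: int) -> tuple[str, int]:
    hashed = hashlib.sha256(dependent_name.lower().encode()).hexdigest()
    return hashed, round(refund, -2)       # round refund to nearest hundred

sent_hash, sent_refund = obfuscate("Jane", 1_234)

# Names come from a small namespace, so a dictionary attack recovers them:
for guess in ("John", "Jane", "Maria"):    # stand-in for a real name list
    if hashlib.sha256(guess.lower().encode()).hexdigest() == sent_hash:
        print("recovered:", guess, "| refund ≈", sent_refund)  # Jane, 1200
```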

(Photo: H&R Block)

TaxSlayer appears to have used Meta Pixel to capture the most detailed user information. Using a “Meta Pixel Inspector” it developed earlier this year, The Markup found that TaxSlayer habitually gathered users’ names, phone numbers, and dependent names. A version of TaxSlayer embedded in personal finance celebrity Dave Ramsey’s websites also obtained users’ income and refund totals. When The Markup asked Ramsey Solutions about its use of Meta Pixel, the company said it hadn’t known about the code’s data-grabbing element and allegedly removed it from its sites. TaxAct similarly stopped capturing users’ financial data but continued to record dependents’ names.

But why? What incentive does Facebook have to grab Americans’ tax information? As it nearly always does, the answer comes down to money. Meta, Facebook’s parent company, regularly uses its approximately 2 million Meta Pixels to capture web users’ browsing activity, demographic data, and more. This information is then used to ensure Facebook and Instagram users are seeing ads they might actually click, thus supporting Meta’s lucrative marketing operations.

The IRS, having been made aware of the tax websites’ Meta Pixel usage, could make Facebook’s tax data harvesting cost more than it’s worth. Websites that share users’ tax information without consent can face steep fines, and as of Tuesday, H&R Block, TaxAct, and TaxSlayer lacked the disclosures necessary to claim consent.

from ExtremeTech https://ift.tt/FYfOmN6

Wednesday, November 23, 2022

NEWS TECHNOLOGIE

(Image: Windows)
A few too many iCloud for Windows users are reporting an odd issue with their cloud-based photo storage: They’re receiving pictures of random strangers’ families. Over the last week, several users have posted to the MacRumors Forums complaining of eerie situations in which their photos have been swapped for someone else’s.

The debacle started Thursday, when a user posted to the iCloud forum that their videos had been corrupted and were displaying black screens with scan lines. On some occasions, videos reportedly swapped in stills from outside of the user’s own iCloud account. “I’ve been shown photos of other people’s families I’ve never seen in my life, soccer games, and other random photos,” the user wrote. “Obviously this is extremely concerning and does not exactly make me feel safe using iCloud.”

The user said they’d been able to replicate the issue on three total PCs, two of which ran Windows 11 Pro while the other used Windows 10 Pro. Within just a few days, other users commented on the post to report similar issues. Their original photos—the ones they intended to view on iCloud for Windows—had been taken with iPhone 13 Pro and iPhone 14 Pro models and were similarly missing. In their place were photos of other people’s children, as well as of other innocuous things.

iCloud for Windows isn’t just home to photos and videos, which only makes Apple’s alleged security breach even worse. (Image: Windows)

In a movie, this would be a big, masterful prank. But this is real life, and users are concerned for their security and their families’. An innocent family photo or a snapshot of last night’s dinner isn’t going to change your life by ending up on someone else’s screen, but the implications of iCloud for Windows allegedly redistributing users’ files are massive. It only takes one misplaced private photo to threaten someone’s personal or professional life; if this security blunder is in fact real, Apple might be facing a nasty lawsuit in the coming months.

That could especially be the case given Apple’s reported response to users’ concerns. A few people who shared their iCloud for Windows woes on MacRumors said they attempted to reach out to Apple but were quickly dismissed. For a company so insistent that its security is superior to competitors’, that doesn’t bode well.

from ExtremeTech https://ift.tt/sAYUzH4

NEWS TECHNOLOGIE

(Photo: Mercedes-Benz)
Remember when luxury automakers—or, ahem, one in particular—began charging customers extra to enjoy features their vehicles already had? Well, Mercedes-Benz is joining the club. The company has begun offering an “acceleration increase” subscription to electric vehicle owners for the cool price of $1,200 per year.

The subscription package applies only to the Mercedes-EQ EQE and EQS all-electric models. The EQE, which will be available both as a sedan and an SUV come 2023, will receive a maximum motor output boost of 45 kW for a total of 260 kW. The EQE 350 sedan will go from zero to 60 in 5.1 seconds (down from 6.0), while the EQE 350 SUV will take about 5.2 seconds (down from 6.2). The EQS, which will also come as a sedan and an SUV next year, will get an output increase of 65 kW for a total of 330 kW. The EQS 450 and EQS 450 SUV will reach 60 miles per hour in 4.5 and 4.9 seconds respectively (down from 5.3 and 5.8).
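Put those numbers side by side and the subscription buys roughly a fifth more power and shaves about 15 percent off the 0-to-60 times:

```python
# The Acceleration Increase figures above, expressed as percentages.
models = {  # name: (base kW, boosted kW, 0-60s before, 0-60s after)
    "EQE 350 sedan": (215, 260, 6.0, 5.1),
    "EQE 350 SUV":   (215, 260, 6.2, 5.2),
    "EQS 450":       (265, 330, 5.3, 4.5),
    "EQS 450 SUV":   (265, 330, 5.8, 4.9),
}
for name, (base, boost, before, after) in models.items():
    print(f"{name}: +{(boost - base) / base:.0%} power, "
          f"{(before - after) / before:.0%} quicker to 60 mph")
# EQE 350 sedan: +21% power, 15% quicker to 60 mph, and so on
```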

The Acceleration Increase package represents Mercedes’ step into a controversial practice: charging customers on a subscription basis for features their cars already possess. BMW kicked off the trend earlier this year when it added heated steering wheels, high beam assistant, and other features to its South Korea, Germany, New Zealand, South Africa, and UK subscription sales pages. Tesla announced shortly after that it would begin requiring customers to pay for navigation following the first eight years of ownership.

(Image: Mercedes-Benz)

Outside of automakers’ corporate offices, few people appear to actually like this model. Following Tesla’s announcement, two New Jersey legislators introduced a bill that would make it illegal for manufacturers to offer subscriptions on pre-installed hardware. We’re also not the only news outlet that immediately scorned BMW for charging drivers $18 per month to use their own heated seats. People, it seems, would like to have free access to the technology they’ve already spent thousands of dollars on.

In Mercedes’ case (and likely in others we’ve mentioned) it doesn’t appear as though dealers or mechanics have to actually get their hands on a car to provide a subscription-based add-on. Instead, extra torque is unlocked via over-the-air updates, which magically make it possible for someone’s $70,000-plus vehicle to accelerate as fast as it technically already could.

from ExtremeTech https://ift.tt/j8eCgxJ

NEWS TECHNOLOGIE

What a difference a year can make. It wasn’t long ago that you would have been lucky to spot a high-end GPU in stock someplace, and it would have most likely been priced well above MSRP. Now, the latest Nvidia RTX 4080 is reportedly piling up at the same retailers that ran through RTX 4090 stock the instant it was released. The RTX 4080 is cheaper but probably a much worse value in the enthusiast GPU market.

The RTX 4090 was the first of the new RTX 40-series to launch, and it saw robust sales even at the obscene $1,600 asking price. People lined up to get the new GPUs, and units have been hard to come by ever since. Perhaps Nvidia expected a similar showing for the $1,200 RTX 4080, but it’s not getting it.

If you want a 4080, you won’t have to spend a long time looking. In the US, retailers like Microcenter have stacks of the cards on shelves, and buyers have even had plenty of opportunities to buy the GPUs online. Newegg, for example, had units available for days before they sold out. It’s not much different in other markets. According to Hardware Canucks, the RTX 4080 is the first Nvidia release in more than two years that hasn’t sold out immediately at Canadian retailers. It’s the same story in the UK and Germany, too.

There are a few factors at play here, most notably Nvidia’s pricing for the RTX 40-series. Before release, Nvidia warned of higher GPU prices. The RTX 4090 certainly is expensive, with some variants creeping up to $2,000. However, this is the top-of-the-line card, and some enthusiasts will always look for the most powerful GPU when upgrading — that’s what Nvidia’s XX90 cards are all about. Nvidia has reportedly shipped 130,000 RTX 4090s and only 30,000 4080s.

Spending more than $1,000 on the second banana doesn’t attract the same level of interest, though. The RTX 4080 represents a significant price increase over the RTX 3080, which launched at $700, making the 4080 roughly 70 percent more expensive. That would have been considered a steal for most of the past several years, but crypto miners are no longer siphoning up every high-end GPU that rolls off the assembly line.

There’s also the issue of the remaining RTX 30-series cards. Many PC gamers held off on the 30-series because it was impossible to find them for a reasonable price. Now, they too are sitting on store shelves, and at much lower prices than the new 40-series. For those who don’t need the best of the best, the 4080 doesn’t offer enough of an improvement over last-gen cards to justify its price tag. Some gamers are probably also waiting to see what AMD can offer with the upcoming Radeon RX 7900.

from ExtremeTech https://ift.tt/Suz0cAJ