Monday, February 28, 2022

NEWS TECHNOLOGIE

Google has long relied on the network pings from Android phones to power the robust traffic monitoring features of Maps, but this data is no longer available in Ukraine. Vice says that Google has disabled the feature following reports that Maps could reveal the movement of troops and civilians as Russian forces continue their assault on Ukraine. 

In some places, like the US and Canada, the iPhone is more popular than Android. That’s not the case in Ukraine and Russia, where about three in four phones run Android. Any Android phone that has location services enabled will report back to the Google mothership, and the aggregate of that data helps to show traffic congestion in Maps. Green lines mean traffic is flowing freely, and yellow or red lines indicate places where it’s slowing. 
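As a rough illustration of how those aggregated pings become map colors, here is a toy sketch in Python. The thresholds and function names are invented for illustration; Google’s real pipeline is far more sophisticated than this.

```python
# Toy model of crowd-sourced congestion coloring. Purely illustrative;
# this is not Google's actual algorithm, just the general idea of
# bucketing average reported speeds per road segment.
from statistics import mean

def congestion_color(speeds_kmh, free_flow_kmh):
    """Map phone-reported speeds on a road segment to a map color."""
    if not speeds_kmh:
        return "gray"  # no data for this segment
    ratio = mean(speeds_kmh) / free_flow_kmh
    if ratio > 0.8:
        return "green"   # traffic flowing freely
    if ratio > 0.4:
        return "yellow"  # slowing down
    return "red"         # heavily congested, or civilians stopped by a convoy

# Phones on a 90 km/h road all reporting crawling speeds:
print(congestion_color([12, 8, 15, 10], free_flow_kmh=90))  # -> "red"
```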

Over the past few days, as Russia advances toward the capital Kyiv, several commentators have noted that late-night “traffic jams” on Google Maps most likely indicate large troop movements. For example, the road between Belgorod, Russia and the Ukrainian border lit up red on February 23 as Russia was staging its attack. The soldiers themselves are probably not carrying active smartphones, but anyplace civilians are delayed by troop movements will be interpreted by Google’s servers as a traffic jam. 

Google has not clarified when it disabled the feature, nor whether a specific event prompted the move. It may even have been asked by Ukraine or the US to disable it. The ability to track concentrations of civilians with an open-access tool like Maps could be very advantageous to an invading army. This is of even greater concern as reports claim Russian forces have begun shelling civilian areas in Kharkiv. Ukrainian authorities have also started dismantling road signs in hopes of slowing Russian troop movements.

Google gets this same data from everyone using an Android phone, and Apple does the same with Apple Maps on the iPhone. However, with far fewer iPhones in Ukraine, Google Maps is more likely to display useful data. If you don’t want to contribute this data to Google Maps, you can disable it in the system settings under the Location menu (on most phones). Turn the toggle off, and your location will not be reported to any Google service. Keep in mind, some apps won’t work correctly without location access.

from ExtremeTech https://ift.tt/NIYLis4

NEWS TECHNOLOGIE

On Friday of last week, news broke that someone had infiltrated Nvidia’s network, though at the time it wasn’t clear what they were after or whether it was somehow related to Russia’s invasion of Ukraine. It now appears the Nvidia hackers were after a very obvious target: the code behind Nvidia’s Low Hash Rate (LHR) limiter, which reduces GeForce GPU performance when mining cryptocurrency.

As we reported last week, the nefarious activity “completely compromised” some of Nvidia’s internal systems, causing the company to lose access to its email system for several days. Nvidia also shut down some of its own systems to prevent the spread of malware and to allow it to investigate. New information has come to light indicating Nvidia had a pretty busy weekend doing battle with the hackers, apparently a group named Lapsus$, according to an account by Wccftech. The group reportedly was able to snag a terabyte of data from Nvidia’s servers, which included drivers, firmware, documentation, and developer SDKs. There were reports over the weekend that Nvidia tried to hack the group back by encrypting the files it had managed to steal, but the group was thinking ahead and had made a backup, so Nvidia’s efforts may have been fruitless.

A message from the hackers to Nvidia. (Image: @cyberknow20 on Twitter)

The hackers made several announcements on Telegram, the encrypted messaging and chat service, which were then posted to Twitter. One tweet noted the hackers said they were releasing “part one” of the files they had exfiltrated from Nvidia. Readers interested in poking around in such repositories should be wary. Last week, a hacker posted what he claimed was a workaround for Nvidia’s LHR limiter. Once people began poking around in it, they discovered it was malware.

Lapsus$ was able to use the stolen files to create a workaround for Nvidia’s LHR limiter, which it is already selling on the black market. The limiter bypass would allow Nvidia RTX 30-series cards to mine cryptocurrency at full speed, instead of being throttled by Nvidia to discourage miners from buying all its GPUs. The hackers are now demanding Nvidia remove the LHR limiter from all its 30-series GPUs, or else they will release even more of the data they have stolen from the company. They could also theoretically release the LHR workaround to the public. Nvidia will obviously never take this course of action.

Even worse, the group claims it gained access to Nvidia employee information, including every employee’s password hash. Bleeping Computer posted the notification on Twitter, but has not verified it as of press time. Password hashes are not the same thing as the actual passwords, but obviously this is not something Nvidia wants out in the wild.
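To make the distinction concrete, here is a minimal Python sketch of why leaked hashes still matter. It is illustrative only; real systems should use salted, deliberately slow hashes like bcrypt or Argon2, not bare SHA-256.

```python
# A hash can't be read backward, but weak passwords can still be
# recovered by hashing guesses and comparing. Illustrative only.
import hashlib

def sha256_hex(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

leaked_hash = sha256_hex("hunter2")  # what an attacker might obtain

# Dictionary attack against the leaked hash:
for guess in ["password", "letmein", "hunter2"]:
    if sha256_hex(guess) == leaked_hash:
        print(f"Cracked: {guess}")
        break
```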

from ExtremeTech https://ift.tt/O9vaVL0

NEWS TECHNOLOGIE

(Photo: Bill Ingalls)
The results of a tweet-turned-stock-selloff have prompted the Securities and Exchange Commission to launch an investigation targeting Elon Musk and his brother, Kimbal Musk.

The Tesla CEO and his restaurateur sibling are suspected of having violated insider trading rules by conspiring to offload billions of dollars worth of stock, according to a source for The Washington Post. In November, Elon Musk asked his Twitter followers whether he should sell ten percent of his Tesla stock to reduce the amount of tax he’d owe on unrealized capital gains. Elon Musk promised to abide by the results of the poll regardless of the outcome. Just days after the poll results revealed that 58 percent had voted in favor of his proposal, Elon Musk began selling billions of dollars worth of stock.

But the timing of Elon Musk’s Twitter stunt raised a red flag to the SEC: Kimbal Musk had sold $108 million in Tesla stock just a day before. As Elon Musk’s Twitter poll gained traction, Tesla’s shares took a nosedive, suggesting many who had seen the poll had viewed it as a not-so-subtle sign to sell their own stock. 

Given the suspicious succession of these events, the SEC is interested in finding out whether Elon Musk told his brother about his upcoming tweet ahead of time. If Kimbal Musk knew about the poll before it ended up on Twitter, the siblings’ communication could be deemed a violation of insider trading rules that prohibit the purchase or sale of shares when the shareholder possesses undisclosed material information. Elon Musk claims his brother did not know about the Twitter poll before it was posted, but that Tesla’s legal team was made aware in advance.

Because Elon Musk could argue that only he, not his company, knew about or controlled his intent to sell, it’s currently unclear which Musk brother would face consequences for his shifty trades. The SEC could also close the investigation without taking any enforcement action at all. Whether the SEC gets to that point, though, will depend on the results of the probe—a process Elon Musk is likely to pester as much as he can, if past behavior is any indication.

from ExtremeTech https://ift.tt/8i6Lpeq

NEWS TECHNOLOGIE

The Valve Steam Deck is now available, well, technically. The first units are shipping in the coming days, and reviews are out. You might feel more inclined to order one after reading the generally positive reviews, but don’t get your hopes up. The shipping queue is going to stretch into the middle of the year. 

Valve’s new handheld benefits from its x86 architecture, which opens up a huge library of games. However, it is no lightweight at 1.47 pounds — it makes the Switch look positively tiny by comparison. The AMD hardware is sufficient for desktop-class games like God of War, Horizon Zero Dawn, and Dark Souls III. All those titles are certified to work on the Steam Deck, but reviewers say most recent games run just fine with some tweaks to settings and controls. If you’re playing something a little less demanding, like the Deck-verified hit Hades, you can just turn down the performance to roughly double the battery life.

The Steam Deck is not the first device that attempts to bring PC gaming to a handheld, but most reviews agree that it’s the best one so far. Valve paid a lot of attention to the experience, allowing the Steam Deck to compensate for some of its unavoidable shortcomings. For example, the battery life is middling, to say the least. Reviews claim you’ll get under two hours of gameplay with the hardware maxed, but it only takes a few taps to adjust the CPU clock and frame rate. Doing that can also reduce fan noise, which is an almost omnipresent annoyance while gaming on the Deck. If your battery is running low, no problem. You can just use the Steam OS suspend function to freeze everything in place while you recharge. When you do juice up, you can use a standard high-wattage USB-C charger. 

Steam OS on the Deck has some clever features to make the handheld gaming experience less frustrating, but the software is still far from “done.” This was one of the primary negatives cited by The Verge, which gave the Steam Deck a 6.5/10. PC Gamer was more kind with an 85/100, praising the customization and interface while complaining about the weight. PCMag says the Steam Deck is “the most compelling mainstream hardware Valve has ever made, and the most exciting non-Nintendo handheld since people stopped pretending the PlayStation Vita had a future.”

People like the Steam Deck, sure, but it’s expensive and limited. It looks and feels like a clunky first-gen device, but the core experience works. The Steam Deck makes your PC games more portable than they’ve ever been — unless you count cloud gaming. Microsoft, Amazon, and others are going all-in with streaming games from a server to devices, which doesn’t require all this heavy, battery-draining hardware. However, you need ultra-fast connectivity that’s still hard to come by. 

If a beefy gamer-friendly handheld is more your speed, you can register your interest in buying a Steam Deck now. Valve’s current timeline calls for new orders to begin shipping sometime after Q2 2022.

from ExtremeTech https://ift.tt/ZobEsLr

NEWS TECHNOLOGIE

(Photo: USPTO)
A recent patent filing by Apple suggests the company is working on some kind of full-blown Mac computer that could be tucked inside one of its Magic keyboards. The design could allow a person to simply unplug the keyboard from an external display, stuff it in their rucksack, and be on their way as opposed to hauling a notebook around. The patent filing is unambiguously titled, “Computer in an input device.”

The lengthy patent filing, which was unearthed by MacRumors, is quite clear. It states: “The computing devices described herein can incorporate or otherwise house one or more computing components within an input device to provide a portable desktop computing experience at any location having one or more computer monitors. For example, a user can transport a keyboard that houses a computer, as opposed to carrying an entire laptop or a tower and keyboard.” This all makes sense of course, but who the heck is carrying a tower PC between locations?

Apple’s “PC in a keyboard” concept…which looks like a keyboard. (Image: USPTO)

It’s possible this device couldn’t be built while Apple was stuck on Intel silicon, and that the company is evaluating new form factors now that the M1 is on the market. The MacBook Air doesn’t even include a fan, and as long as it’s not being used to encode video or do heavy rendering work, the CPU mostly stays cool. That said, Apple does mention a cooling mechanism that could be included in the device. To enable passive ventilation, the keyboard would include a pathway for air to enter, flow through, and exit again.

Apple isn’t sure passive ventilation would be enough to keep the system cool, however. The patent adds that Apple might need to include “one or more air-moving apparatuses” inside the chassis to help the air move along the intended pathway. Additionally, Apple describes a design with a “thermally conductive” base material, with the CPU in “thermal communication” with it, allowing the base to absorb its heat and radiate it by spreading it across a larger area. Apple lists “non-limiting thermally conductive” materials as copper, bronze, steel, aluminum, and brass. As we all know, Apple already uses aluminum in its wireless keyboards, so the nuts and bolts of the design are already somewhat in place. We also really hope Apple makes a bronze keyboard.
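For a rough sense of why the base material matters, here is back-of-the-envelope conduction math in Python. The conductivity values are standard handbook figures; the geometry is invented for illustration and has nothing to do with Apple’s actual design.

```python
# One-dimensional steady-state conduction: Q = k * A * dT / L.
# Conductivities are standard handbook values; the geometry below is
# a made-up example, not anything from Apple's patent.
CONDUCTIVITY_W_PER_M_K = {"copper": 401, "aluminum": 237, "steel": 50}

def heat_flow_watts(material, area_m2, thickness_m, delta_t_k):
    k = CONDUCTIVITY_W_PER_M_K[material]
    return k * area_m2 * delta_t_k / thickness_m

# Heat conducted through a 10 cm^2 contact patch, 1 mm thick,
# with a 10 K temperature difference:
for metal in ("copper", "aluminum", "steel"):
    q = heat_flow_watts(metal, area_m2=10e-4, thickness_m=1e-3, delta_t_k=10)
    print(f"{metal}: {q:.0f} W")
```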

This filing by Apple reminds us of a similar device Asus attempted to bring to market about a decade ago, when its Eee PC-branded netbooks were all the rage. You can take a look at it here via Cnet, but the device ultimately failed because it was overpriced at $500 and underpowered. Like other netbooks of the era, it had a weak Atom CPU, 1GB of system memory, and 16GB or 32GB of storage. It’s amazing we liked those devices so much. (I hated netbooks – Ed).

The Asus Eee Keyboard, circa 2010. (Image: Cnet)

Apple would not be limited to such low-end specs in 2022, and such a system could plausibly offer similar performance to an M1-powered MacBook Air. Though those devices are limited to a maximum of 16GB of memory, that’s more than enough for users who mostly use a web browser and an office suite to get work done. It would also jibe with the company’s reported focus on its Mac desktops for 2022, as it’s rumored to be bringing four new models to market this year. Finally, it should be noted Apple patents all kinds of things that may never see the light of day, such as an all-glass Mac tower, so take this filing with a grain of salt.

from ExtremeTech https://ift.tt/69ftki4

Friday, February 25, 2022

NEWS TECHNOLOGIE

Credit: Nvidia

Nvidia is reportedly in the middle of dealing with some kind of cyber attack that has lasted two days so far, according to various media sources as well as the company itself. Reports indicate the attack, which is still being investigated, has taken out the company’s email systems and its developer tools, but it’s unclear if other components of the company’s business were similarly affected by the intrusion into its networks. It’s also not clear if the situation poses a threat to any of Nvidia’s partners.

According to The Telegraph, one company insider said the unwelcome intrusion has “completely compromised” the company’s internal systems, but that portions of its email service had resumed operation as of Friday, the 25th. An Nvidia spokesperson confirmed the unfortunate event, but offered few details. “We are investigating an incident. We don’t have any additional information to share at this time,” the spokesperson said. It’s unclear at press time whether any data was stolen from Nvidia’s servers, or to what extent any damage was caused. Reports indicate the company hasn’t yet discovered who the perpetrator is, but the Telegraph reports that the company hadn’t alerted any of its partners to the intrusion as of press time. However, Nvidia has many partners so it’s doubtful the paper could have queried all of them.

Given the timing, the attack certainly raises questions about whether it’s tied to the recent Russian aggression in Ukraine, as the cyber attack began at roughly the same time as the Russian incursion. Shortly thereafter, the US announced major sanctions against Russia in retaliation for its actions, so it’s possible that hackers friendly to Russian interests are counter-attacking, and a huge and important company like Nvidia would certainly be a juicy target. However, several days ago the Secretary of the Department of Homeland Security, Alejandro Mayorkas, said the US doesn’t know of any specific and credible threats targeting US companies at this time, but that companies should be prepared just in case. It is also possible that a country or organization with no particular ties to the Russia/Ukraine conflict has chosen this moment to make a move.

According to Wccftech, the impact of the hack was severe enough that Nvidia had to take several of its internal systems offline to prevent further intrusions, or the spread of malware throughout its network. It’s a situation reminiscent of the SolarWinds hack disclosed in 2020, in which hackers infected the company’s software update mechanism, allowing malware to spread to its customers, including many US government institutions.

from ExtremeTech https://ift.tt/H0uPq2W

NEWS TECHNOLOGIE

A render of Webb's final configuration.

Where once there were 18, now there’s just one. The James Webb Space Telescope is currently hanging in space at the Earth-Sun L2 Lagrange point, where it will stay for the rest of its mission. Before it can get around to doing any science, NASA has to calibrate the instruments and adjust the 18 mirror segments. Initially, every segment created its own image, but NASA reports it has now aligned the James Webb Space Telescope’s mirrors so they all focus on the same location.

The James Webb Space Telescope (JWST) is the successor to Hubble, which has been operating now for over 30 years. While the aging observatory runs low on redundant systems, Webb is brand new, and its textbook launch means it could operate for as long as 20 years. Its position about a million miles away allows the telescope’s instruments to remain extremely cold, which is ideal for the mid-infrared observations that will set the JWST apart. Getting this far was a nerve-wracking process for a world that has watched the telescope take shape over the last 20 years — it was folded up inside an Ariane 5 rocket, and there were hundreds of potential points of failure in the origami-like unfolding process.

First, the team performed Segment Alignment, which overlapped the light from the various mirror segments.

The last element of the deployment is to get the primary mirror configured, which is why there’s just one image of the star HD 84406 below instead of 18. Unlike Hubble, which has a single monolithic mirror, Webb has a Korsch-style reflector made up of adjustable segments. The first image from Webb showed the guide star 18 times with no rhyme or reason. Then, NASA identified each mirror segment’s position and began nudging them in the right direction. Through this process, the image from each mirror is directed to the same location on the sensor, producing a single stacked image.

Segment Alignment (left) resolves into finished image stacking (right).

This is not the end of the calibration process, though. The segments are now in the right approximate orientation, but close enough only counts in horseshoes and hand grenades. Astronomy, not so much. In its current state, Webb’s primary mirror is acting as 18 small telescopes instead of one large one. Next, the team will begin tweaking each segment, aligning them until the variation is smaller than the wavelength of light. This process is called Coarse Phasing, and it’s the fourth of seven steps to get the observatory’s mirror ready. 
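As a toy illustration of the stacking idea (purely a sketch; the real procedure relies on wavefront sensing and actuator moves measured in nanometers, not simple pixel shifts), consider co-adding 18 offset copies of a star image:

```python
# Toy model of "image stacking": shift each segment's spot to the same
# sensor location, then sum. Illustrative only, not NASA's procedure.
import numpy as np

SIZE, N_SEGMENTS = 64, 18
rng = np.random.default_rng(0)

def spot_image(cx, cy):
    """Render a Gaussian star spot centered at (cx, cy)."""
    y, x = np.mgrid[0:SIZE, 0:SIZE]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 8.0)

# Each misaligned segment images the star at its own random offset.
offsets = rng.uniform(10, 54, size=(N_SEGMENTS, 2))
frames = [spot_image(cx, cy) for cx, cy in offsets]

# "Align" each segment by shifting its spot to the sensor center.
stacked = np.zeros((SIZE, SIZE))
for frame, (cx, cy) in zip(frames, offsets):
    dy, dx = round(SIZE / 2 - cy), round(SIZE / 2 - cx)
    stacked += np.roll(frame, (dy, dx), axis=(0, 1))

print("Stacked peak lands at:", np.unravel_index(stacked.argmax(), stacked.shape))
```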

So far, everything has gone swimmingly, a welcome departure from Webb’s time on the ground when it seemed like everything that could go wrong did. With all the delays, Webb ended up costing $10 billion. If the rest of the mission goes as well as deployment, it will be more than worth the cost.

from ExtremeTech https://ift.tt/FhHrcKp

NEWS TECHNOLOGIE

After a roaring 2021 in which AMD and Intel saw record revenue and profits, AMD has finally pipped Intel in overall market cap, making it the more valuable company despite being dwarfed by Chipzilla’s sprawling stature. Although market caps do fluctuate based on a company’s stock price, AMD actually pulled this feat off twice in the past week, climbing to a value of $188 billion compared to Intel’s $182 billion. “AMD worth more than Intel” isn’t a headline we expected to write, even briefly, but here we are.

The news of AMD’s sudden surge in valuation comes via Yahoo Finance, which bases the company’s rise on several recent developments. Possibly the most valuable is the news that AMD finally completed its $35 billion purchase of semiconductor maker Xilinx, the largest acquisition in its history. The acquisition is expected to bolster AMD’s portfolio of cloud and data center products going forward. Of course, Intel also had an acquisition of its own recently, as it announced it was purchasing Tower Semiconductor for $5.4 billion in an effort to add new capacity and services to its Intel Foundry Services (IFS) division, but that merger will take some time to close.

AMD / Xilinx merger. Image by AMD.

Perhaps the biggest reason is just that AMD has been posting better numbers for quite some time, despite those numbers being smaller than Intel’s due to their size discrepancy. As we reported previously, AMD finished 2021 with a 68 percent increase in revenue year-over-year, while Intel only boosted revenue last year by one percent, though it did earn record revenue.

According to Yahoo, in 2022 this trend might continue as AMD has projected a 31 percent growth in revenue, while Intel is anticipating a more meager two percent increase. Intel is also busy dumping truckloads of cash into new projects around the world, such as its new $20 billion fab in Ohio, as it tries to address the surging demand for semiconductors. These expansion plans will likely cause Intel to incur a $36 billion penalty in the near term as it begins to ramp up its production capacity with the hopes of avoiding a similar crisis in the future.

Overall, 2022 will be a pivotal year for both companies, as the competition between them is already red hot. Intel has already begun to reclaim the CPU performance crown from AMD with its Alder Lake desktop CPUs, and now the companies are set to do battle with their latest mobile platforms as well, which both are currently launching. On the desktop side of the equation, Intel is already apparently cutting into AMD’s market share, according to at least one source. That source, Passmark, shows that although the companies are comparatively even in the desktop market, it’s not even close in the laptop market, where Intel holds around 77 percent to AMD’s 23 percent. AMD’s new 6000 series mobile chips have just launched, so it’s too early to tell what kind of a threat they might pose to Intel, but early reviews show AMD has prioritized power efficiency over raw horsepower.

Though AMD and Intel battling in the laptop and desktop space is nothing new, the companies will also be clashing in a big way in the server arena in 2022, as AMD preps its highly-anticipated Milan-X CPUs and upcoming Zen 4 Genoa to do battle with Intel’s upcoming Sapphire Rapids chips. This market is critical to both companies for many reasons, and Intel is hoping its next-generation CPUs will be able to allow it to take back the server CPU performance crown from AMD in the same way it has on the desktop with Alder Lake. According to recently leaked benchmarks comparing Milan-X to Sapphire Rapids, it looks like AMD is going to have its hands full in 2022 despite its rosy financial outlook.

from ExtremeTech https://ift.tt/8sZQMB6

NEWS TECHNOLOGIE

(Image: Valve)
Today is a special day for those who were first in line to officially order their Steam Decks. After a seven-month wait, early adopters who put down a $5 deposit back in July can complete their purchase and expect to receive their new handheld console in early March. In anticipation of this occasion, Valve has rolled out a new tool that conveniently lets players know which games from their own library are compatible with Steam Deck. 

Valve has been touting its new(ish) “Deck Verified” program for a while now as its teams go through the painstaking process of reviewing Steam’s 50,000-plus titles, then tossing them into metaphorical bins depending on how well they play on Steam Deck. As most know by now, games that have undergone this process receive a verified, playable, unsupported, or unknown label. (The difference between verified and playable is that playable games require a bit of adjustment on the player’s part in order to offer a pleasant experience.) As of our last check-in on February 16, the program had reviewed 580 games total, with 309 verified, 211 playable, and 60 unsupported.   

But in order to find out exactly which games from their library were considered verified or playable, players had to search through a pair of unofficial lists as they were updated. This was inconvenient, and obsessively checking a list every day to find out if I Am Bread has finally been reviewed is no way to live. (It hasn’t been, in case anyone was wondering.) So Valve came up with a better solution, just in time for its first console’s debut: a simple little webpage that tells those with existing Steam libraries which of their games have been through the Deck Verified review process. 

Each verified, playable, or unsupported title from a player’s library comes with a “Steam Deck Compatibility” button that, when clicked, provides more information about how and why that game is—or isn’t—a good match for Steam Deck. Clicking the button reveals how legible in-game text is on Steam Deck, how well the game’s default graphic configurations perform, and other considerations that are important to determining how enjoyable gameplay might actually be. Verified games pretty much say they’re ready to go right out of the box, while playable games might need players to change a few settings before settling in.

The new tool also tells players how many games from their library remain untested—a number you can still refresh and obsess over, if you want. 

from ExtremeTech https://ift.tt/wAzaxq9

NEWS TECHNOLOGIE

Regulatory wrangling earlier this year threatened to scuttle the much-anticipated rollout of new 5G networks from AT&T and Verizon. However, the carriers and US aviation officials were able to come to an agreement about how and where the new c-band spectrum could operate. Now, the FAA has issued an airworthiness directive aimed at the Boeing 737 that sheds some light on the issues that have delayed a full c-band rollout. According to the document, these aircraft could suffer altimeter malfunctions in the presence of certain 5G frequencies. 

This mess could have been avoided, but it seems no one took responsibility for ensuring airlines made several necessary upgrades. The Federal Communications Commission announced in 2020 that it planned to auction off spectrum that was previously used for satellite TV operations. Now that the c-band (a block of spectrum around 4 GHz) is in the hands of carriers, it will broadcast at much higher power around the country. That’s good for your mobile data but bad for older radio altimeters. 

The FCC built in a buffer zone to prevent the c-band from leaking into the spectrum reserved for air traffic. However, many older aircraft needed upgrades from their old, inefficient altimeters to prevent interference. That didn’t happen on time, and so the FAA stepped in to make sure 5G didn’t pose an immediate danger to travelers. The new directive (PDF) singles out Boeing’s 737 as a major issue for 5G. This is an extremely popular model, and the systems in many of them will not operate correctly in the presence of 5G in the 3.7-3.98 GHz range. 
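The arithmetic of that buffer is simple. Radio altimeters operate in the 4.2-4.4 GHz band (the standard international allocation, a detail added here for context rather than taken from the FAA directive), so the separation from the top of the new c-band block works out as follows:

```python
# Guard band between the c-band 5G block (3.7-3.98 GHz, per the FCC
# auction) and the radio altimeter band (4.2-4.4 GHz, the standard
# allocation; this figure is added context, not from the article).
C_BAND_TOP_GHZ = 3.98
ALTIMETER_BOTTOM_GHZ = 4.2

guard_band_mhz = (ALTIMETER_BOTTOM_GHZ - C_BAND_TOP_GHZ) * 1000
print(f"Guard band: {guard_band_mhz:.0f} MHz")  # 220 MHz
```

A 220 MHz buffer is plenty for a well-filtered receiver; the problem, as the article notes, is older, inefficient altimeters that can’t reject strong signals that close to their band.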

A 5G millimeter wave cell site on a light pole in Minneapolis.

Radio altimeters are part of the automated landing systems on many modern planes—they’re not restricted to the 737. The kind of interference the FAA has identified can cause failures in the “autopilot flight director system, autothrottle system, flight controls, flight instruments, traffic alert and collision avoidance system (TCAS), ground proximity warning system (GPWS), and configuration warnings.” The agency says this could cause an unacceptable increase in “flight crew workload.” That seems like a gentle way of saying 5G could affect safety.

The notice affects 2,442 planes in the US and another 8,342 worldwide. There’s no immediate danger, though. The directive does not require the grounding of any aircraft, but it could complicate carrier efforts to roll out more 5G. The aircraft are free to continue operating in areas where 5G has been mitigated as required by regulators or where it doesn’t exist yet. AT&T and Verizon have agreed to keep c-band clear of airports and to operate at lower power levels in the surrounding area. The carriers agreed to keep things like this until the summer, at which time there will probably be another round of recriminations.

from ExtremeTech https://ift.tt/pendr1B

NEWS TECHNOLOGIE

With Nvidia’s recent RTX 3050 likely the last in the Ampere family, eyes have turned to what lies ahead with RTX 4000. Unfortunately, that glimmer on the horizon might be a giant fireball, possibly emblazoned with the words “Ada Lovelace.” Nvidia’s next-generation RTX 4000 GPU architecture is reportedly so power-hungry that we hope the rumors aren’t true. If they are, an awful lot of gamers may need power supply upgrades just to run the card.

According to noted Twitter GPU soothsayer Greymon55 via Videocardz, Nvidia’s upcoming AD102 chip, which represents the full version of its next-gen architecture, will come in three variants, reportedly consuming 450W, 650W, and an eye-popping 850W, though Greymon55 adds that this is subject to change. They also noted in a later tweet that their posts aren’t always “leaks,” but more just personal “thoughts and guesses,” so go and find the biggest grain of salt you can to consume with this information.

@Greymon55 on Twitter, who probably works for a power supply company. That’s a joke by the way.

Interestingly, another famous GPU-predicting Twitter account chimed in as well, saying they’ve heard similar rumors but with slightly lower TGP values. Twitter user @kopite7kimi noted, “That’s just a rumor. I’ve heard 450/600/800W for 80/80Ti/90 before. But everything is not confirmed.” We should pause here and appreciate the irony of one rumor monger telling another it’s “just a rumor.” Also, if you’re thinking Nvidia would be alone in launching GPUs with these kinds of insane power requirements, think again. Greymon55 notes that Nvidia would only do it to match or best AMD, and wouldn’t do it if AMD planned to launch 350W RDNA3 cards, for example, which are due around the same time as Nvidia’s Lovelace cards. They wrote on Twitter that for AMD’s next-gen, “power consumption will not be low, but probably not crazy.” Going back to the Nvidia rumors, Greymon55 also tweeted that anyone who expects to buy an AD102-based GPU, such as an RTX 4080 Ti or 4090, will need a 1,500W power supply.

As you may recall, these two Twitter accounts previously posted rumored specs for both Nvidia’s AD102 and AMD’s RDNA3 cards, and if they are anywhere close to the truth, both camps are taking the “all guns blazing” route for their upcoming GPUs. What will be particularly interesting in the next round of the GPU wars is that both companies are using TSMC’s 5nm process, the first time in recent memory that the two will share manufacturing technology for competing product lines. We wrote at the time that these GPUs could end up consuming around 500W, which seemed high back then. Now the incoming RTX 3090 Ti might come close to that number itself, with rumors circulating that it will consume 450W. It makes sense that power consumption might increase in Nvidia’s next-gen products despite the move to a smaller manufacturing node (8nm to 5nm): the death of Dennard scaling around 2004 has left GPU manufacturers unable to reduce voltage to compensate for higher clock speeds, and we can see that reflected in the way GPU TDPs have steadily climbed over the last few years.
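The reasoning behind that last point can be made concrete with the classic dynamic power relation, P ≈ C·V²·f: if voltage can no longer drop with each node shrink, pushing clocks (and the voltage needed to hold them stable) raises power disproportionately. A quick sketch with made-up numbers:

```python
# Dynamic power scales as P ~ C * V^2 * f. All figures are invented
# for illustration; none are real GPU specifications.
def dynamic_power(capacitance, voltage, frequency_ghz):
    return capacitance * voltage ** 2 * frequency_ghz

baseline = dynamic_power(capacitance=1.0, voltage=1.00, frequency_ghz=1.9)

# 20 percent higher clocks, plus the voltage bump needed to sustain them:
pushed = dynamic_power(capacitance=1.0, voltage=1.15, frequency_ghz=2.28)

print(f"Clock increase: +20%, power increase: +{pushed / baseline - 1:.0%}")
```

The quadratic voltage term is why a modest clock bump can show up as a 50-60 percent jump in board power.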

from ExtremeTech https://ift.tt/HdDwXCy

NEWS TECHNOLOGIE

When Intel launched its all-new Alder Lake platform in late 2021, it was heralded by the company as its return to greatness after it spent several years launching lackluster updates to its existing 14nm platforms. During this time AMD had clearly leapfrogged the company, using TSMC’s more advanced 7nm node to deliver exceptional products that allowed it to capture significant mind and market share in the DIY enthusiast space. All that looks to be changing now, according to at least one source. Passmark, maker of CPU benchmarking software, has released data on the CPUs submitted to it for testing, showing a clear reversal of fortunes in the desktop CPU space, a trend that started at the end of 2021. Alder Lake is gaining market share.

In figures released by Passmark showing the submission rates for CPUs going back almost 20 years, a clear trend has emerged. Whereas AMD appeared to leave Intel in the dust starting around 2019 or so, with AMD surging and Intel falling, that pattern has now reversed. In a tweet and blog post, the company noted the reversal started in Q3 of 2021, and is showing no signs of slowing down any time soon. The included chart for the DIY desktop CPU market shows that as of Q1 of 2022, Intel had clawed its way back to 58.5 percent market share, with AMD declining from its peak of 50.7 percent a year ago to just 41.5 percent in the last quarter.

Desktop CPU market share from Passmark. (Image: Passmark)

In the notes for its numbers above, Passmark writes, “This graph counts the baselines submitted to us during these time period and therefore is representative of CPUs in use rather than CPUs purchased.” As Wccftech notes, AMD’s market share is currently where it was prior to the company launching its lauded Ryzen 5000 series CPUs. The reasons for AMD’s surprising turnaround likely include the chip shortage, which required AMD to focus only on its more expensive chips in order to protect its profits. It also doesn’t offer a true entry-level CPU, and has so far refused to allow widespread support for its 5000 series CPUs on older X370 motherboards. This is ostensibly due to compatibility reasons, though companies like ASRock have allowed it despite AMD’s wishes.

Market share for all CPUs, not just desktop. (Image: Passmark)

The same trend is evident in the company’s chart for “all CPUs,” which includes laptops. Somewhere towards the end of last year AMD began to trail off while Intel began to surge, which coincides with the timing of Alder Lake’s launch. Obviously the numbers here don’t specify which Intel CPUs began showing up more prominently in its benchmark results, but the timing of the swing is certainly an interesting coincidence.

Before you bust out the violin for AMD here, it should be noted this is just one data point for this particular benchmark. Overall, AMD is still in the best position it’s ever been in financially. It finished 2021 with record-breaking results, including revenue that increased 68 percent year-over-year. It also finished the year with its highest CPU market share since 2006. The company is also about to launch its first CPU update in quite some time, as it should deliver its first 5000 series CPU with V-Cache in the near future. However, that update will be limited to just one CPU, the Ryzen 7 5800X3D, an 8C/16T CPU with a 3.4GHz base clock and a 4.5GHz boost. Whether or not that lone CPU will be enough to upset the apple cart remains to be seen, but all indications are Intel is in the driver’s seat for now with its Alder Lake platform. AMD will respond in turn with Zen 4, which should be available later this year.

from ExtremeTech https://ift.tt/8GFTMOk

Thursday, February 24, 2022

NEWS TECHNOLOGIE

Russian forces are flooding into Ukraine today, following a declaration of war by President Vladimir Putin. Airstrikes began on Wednesday, and ground troops are beginning to carve up the border regions as they make their way toward the capital. There’s an extra layer of danger in this war that most conflicts don’t have: a failed nuclear reactor. Ukrainian authorities confirmed today that Russia has seized the decommissioned Chernobyl nuclear power plant, and that could have serious consequences for all of Europe.

The Chernobyl nuclear facility was brought online in 1977, and for nine years it operated without incident. Then, in 1986 a series of technical errors during a safety test set the No 4 RBMK reactor on a path to meltdown. An explosion exposed the reactor to the environment, scattering radioactive debris across the city of Pripyat. Fallout from the event slowly spread across Europe, which made it impossible for the USSR government to keep the incident under wraps. 

In the decades since the reactor failure, the world has worked to decommission the reactor and contain the radiation. To this day, the general public is not allowed in the Chernobyl exclusion zone, but Russian forces are now in control of the region, and the Ukrainian government claims there was hard fighting as Russian troops pushed into the zone.

Back in 1986, USSR experts barely managed to extinguish the burning reactor before the molten fuel could contaminate the groundwater, but the site will be dangerously radioactive for generations to come. Ukraine temporarily sealed Chernobyl with a shroud known as the sarcophagus, but that was replaced in 2016 with a structure called the New Safe Confinement (NSC). It’s the arched structure seen above.

The good news is that the NSC was built to stand for a century or more. It’s resistant against extreme weather, fire, and earthquakes. However, reports just last year suggested the fissile material inside the NSC is heating up again. It still contains about half of the original uranium fuel, now transformed into a radioactive sludge inside the remains of reactor 4. Ukrainian authorities were previously monitoring the neutron levels inside the NSC, watching as they doubled in the years since construction finished. Now, no one is watching those levels or checking for damage. 

In general, having a war in the immediate vicinity of a failed nuclear reactor is a bad, bad idea. The original disaster only killed 50 people directly, but the UN estimates a further 4,000 around Europe died from exposure to Chernobyl’s radioactive fallout. It’s also possible that troops marching through the exclusion zone could encounter areas of high radiation. For example, places where graphite from the initial explosion landed can have radiation levels a thousand times higher than normal even more than 30 years later. Any activity in the region risks disturbing and distributing these dangerous particles to other areas, but that’s nothing compared to a damaged NSC — an ongoing leak from Chernobyl could be catastrophic. 

It remains unclear what will become of Chernobyl as the war between Russia and Ukraine heats up. Authorities around Europe will no doubt be on the lookout for radiation spikes that could indicate damage to the reactor’s protective shield. We can only hope Russia ensures the integrity of the NSC.

from ExtremeTech https://ift.tt/NfocOsT

NEWS TECHNOLOGIE

Credit: NASA

One fateful day about 66 million years ago, a large object crashed into the Yucatan Peninsula. The resultant firestorm and long, cold winter wiped out 75 percent of life on Earth, including most dinosaurs. Scientists have studied the so-called Chicxulub impact for decades, but only on timescales of millennia. For the first time, researchers have zeroed in on when the asteroid or comet walloped Earth, and they say it happened on an otherwise pleasant spring day. This could help explain why some species survived and others perished.

Impact-driven extinction was first proposed in 1980 by physicist Luis Alvarez and his geologist son Walter, and is therefore known as the Alvarez hypothesis. The best evidence for this idea is the Chicxulub crater, discovered in Mexico several years before the Alvarezes had their say. The crater is the result of a large asteroid or comet impact, and its location in the geological record matches the Cretaceous–Palaeogene mass extinction that ended the reign of dinosaurs. This is the scientific consensus today, although the evidence is trending toward asteroid rather than comet.

One problem for the Alvarez hypothesis has been why the extinctions were so selective. All the non-avian dinosaurs died out, as did the majority of marine reptiles and ammonites. And yet, many species of mammals would survive to eventually give rise to humanity. The mass extinction also spared crocodiles, birds, and smaller reptiles. The authors set out to determine if the pattern of extinctions had anything to do with the time of year when the impact occurred. The first order of business was to identify the season, and to do that, they went to North Dakota.

The Chicxulub impactor is believed to have struck the surface at a very steep angle, imparting maximum energy upon landing. While it was only a few miles in diameter, it released a billion times more energy than the atomic bombs dropped in World War 2. The collision was so catastrophic that it shook the continental plate, sending a wave of water and sediment upstream (all the way to North Dakota and beyond) from what is today the Gulf of Mexico. The surge enveloped and instantly killed a huge number of fish, turning them into artifacts from the final day of the Cretaceous period. At the same time, glass beads of melted rock rained down on the landscape, ending up preserved alongside the fossilized fish. This is known as the Tanis event.

A paddlefish that died on the last day of the Cretaceous period.

North Dakota has one of the best Tanis deposits, allowing researchers to examine the conditions on Earth that fateful day. The team excavated paddlefish and sturgeon from the deposit (see above), ending up with several lower jaws, teeth, and pectoral fins. The remains were scanned using X-ray tomography and rendered in 3D for analysis. There are minute cellular changes in many fish species throughout the seasons, and those can tell us at what point in their growth cycle they died. In this case, all six representative specimens died in the spring (northern hemisphere). The glass beads were present only in the gills, which showed the fish were alive and foraging at the end of the Cretaceous.

The study speculates that many of the species that survived may have been prevalent in the southern hemisphere, where rather than gearing up to mate, they would have been hibernating in burrows or caves. That might have shielded them from the immediate fallout and helped them eke out a life in the ashes.

from ExtremeTech https://ift.tt/YSkaXKg

NEWS TECHNOLOGIE

Apple just released the iPhone 13 this month, along with the new iPad Mini and Apple Watch 7. (Photo: Apple)

According to sources involved with Apple’s supply chain, the company plans to launch its rumored mixed reality headset later this year. This flies in the face of earlier reports that the headset had been delayed until next year due to a variety of issues Apple was struggling with, which would not necessarily have been surprising given that Apple has never launched a product like this before. Sources indicate that Apple is far enough along in its production process that the unnamed headset has reached the EVT 2 stage, which stands for Engineering Validation Testing, phase 2.

For those not privy to the ins-and-outs of Apple’s production process, this would mean the new device is about midway through its journey to final production. That’s according to a summary provided by 9to5Mac, which also flagged the supply chain sources at Digitimes. The company begins with hardware prototypes to get the design nailed down while a software team works in parallel. Once the design is finalized it moves into the Engineering Validation Testing phases, which are numbered. This is where the company combines all the software and the hardware and produces physical units for testing.


The prototype then moves through those various phases (1, 2, 3, etc.) and is iterated on until it arrives at the next phase, which is called Design Validation Test. This is where “real world” testing is done to make sure the device works properly and will not break during normal use. The next phase, Production Validation Testing, is the last.

In PVT the company tests its manufacturing process for the device it’s creating. As 9to5Mac reports, the iPhone 14 has reached this phase, and we all know that phone will launch in September, so the headset could be only several months behind. Apple might officially unveil the headset at its October event, which is usually focused on new hardware. This would track with earlier rumors that the headset would actually arrive in 2023.

A mock-up of a theoretical Apple headset by concept designer Antonio DeRosa. (Image: @aderosa75 on Twitter)

On the surface this seems like good news in a time of constant delays for anything that involves semiconductors, gaming, or just fun in general. However, we still don’t know exactly what market Apple will be targeting with its new headset. Reports have varied wildly on what Apple’s goals are for the device. So far it’s been reported that the company’s initial headset will be a powerful and expensive device that runs on silicon similar to that in its M1 Macs, and it will not be metaverse friendly. It might even be made to target enterprise customers with an expensive software subscription model, similar to the current Microsoft HoloLens. This would make sense, as Apple tends to enter markets only when margins are rather high.

Once that ball of wax is up and running, Apple will eventually introduce its version of AR glasses, which is reportedly what Apple believes will replace the iPhone, possibly on a ten-year timeline. It certainly seems fanciful here in 2022 that we’ll all be wearing AR glasses around someday. Previous attempts to bring such products to market have failed. However, it’s never wise to discount Apple’s ingenuity. The company is known for arriving late to a market, like smart watches, and becoming the market and/or profit leader within several years. It will be interesting to see what angle Apple takes on a market dominated by its arch nemesis Meta and its Quest 2 headset. Alternately, Apple might take aim at Microsoft’s HoloLens instead. There are rumors that Microsoft might abandon HoloLens 3, which could leave the market primed to welcome a new provider.

from ExtremeTech https://ift.tt/tbWuXDQ

NEWS TECHNOLOGIE

(Photo: Lip Kee Yap/Wikimedia Commons)
A recent study on the feasibility of a new GPS tracking device for wild birds found that Australian magpies engaged in “cooperative rescue behavior” to help each other remove the tracking devices. 

Scientists from the University of the Sunshine Coast and the University of Queensland originally set out to test miniature backpack-like tracking harnesses on magpies, a type of passerine common to Australia. After allowing a group of ten magpies to adjust to the researchers’ presence over a few weeks, the team safely captured five of the birds and equipped them with tracking harnesses and colored leg bands for identification purposes. The magpies were then re-released for observation.

Almost immediately, the magpies got to work attempting to remove the harnesses. Four of the five magpies were found pecking at their own tracking devices within the first two days of observation. On the day of capture and release, one tracked magpie was seen attempting to remove its own tracker when another magpie, an untracked juvenile, joined in by pecking at the harness in unison. The two were unsuccessful at removing the tracker, which prompted a third magpie, an untracked adult female, to approach and successfully peck the tracked magpie free of its harness. At the same time, another untracked magpie was helping a tracked one break free of its harness atop a powerline. (After falling off the powerline mid-peck, the magpies continued their mission in a tree, a process which was partially captured in the video below.) 

Animals have long been found to engage in cooperative relationships in the wild; the classic example of the oxpecker, a type of bird, and the zebra comes to mind. (The oxpecker hangs out on the zebra’s back, picking bugs from the zebra’s hair, resulting in a steady stream of food for one and convenient pest control for the other.) But these relationships usually constitute mutualism, in which both parties enjoy some type of immediate benefit. In the case of the magpies, those responsible for snipping the tracking devices did so without receiving any tangible reward. 

Such collaborative (and apparently altruistic) problem-solving behaviors may be a major reason why magpies have adjusted relatively well to extreme habitat changes caused by humans, including climate change, according to the researchers. Though the tracking device trial didn’t quite go as planned, those involved don’t feel the effort was wasted; the magpies’ immediate desire to free one another from their strange new harnesses appears to be a testament to their species’ prosocial behavior. 

“We never considered the magpies may perceive the tracker as some kind of parasite that requires removal,” said study author and behavioral ecologist Dominique Potvin for Australia’s ABC News. “Just like magpies, we scientists are always learning to problem solve. Now we need to go back to the drawing board to find ways of collecting more vital behavioral data to help magpies survive in a changing world.”

from ExtremeTech https://ift.tt/WbaxPwU

Wednesday, February 23, 2022

NEWS TECHNOLOGIE

There aren’t a ton of great ways to run Android apps on Windows, but there are several official methods on the horizon. Microsoft announced last year that Android apps would run on Windows 11 thanks to a partnership with Amazon. Windows 11 is widely available now, but the Android functionality is only beginning to roll out in preview. Even when it’s ready, you might not be able to enjoy this particular feature. Microsoft has provided recommended specs for Android apps, and you’ll need some serious hardware, reports PC World.

Microsoft’s new support page for Android app functionality says you need at least 8GB of RAM, but 16GB is recommended. 16GB is pretty standard for high-end office machines and baseline gaming, but plenty of PCs still come with 8GB. Those machines will probably have a bad time running any hefty Android apps. Some computers even still have 4GB of RAM, and they won’t be able to run Android apps. After all, most basic Android phones only have 4GB of RAM, and they’re not running a full desktop operating system on the side. 

It’ll probably be a little easier to hit the storage spec — Microsoft says you’ll need an SSD rather than a spinning drive. SSDs have considerably faster performance than even the most efficient traditional hard drives. The CPU situation will rule out more machines, though. On the Intel side, you need at least an 8th Gen Core i3, which launched in 2017. AMD fans will have a little more trouble as they need a Ryzen 3000-series, which came out in 2019. Yes, even those beefy 8-core second-gen chips won’t work. It’s unlikely this is a matter of raw power — Ryzen 3000 only boosted performance by about 15 percent. The newer Zen 2 and Zen 3 Ryzens may support technologies that Microsoft needs. For the very few Snapdragon-powered Windows machines, you’ll need the Snapdragon 8c or above. Regardless of your hardware, you’ll need to be on the new February update.
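If you’re curious whether a given machine clears the bar, a rough self-check is easy to script. The sketch below mirrors the published minimums but is not Microsoft’s actual compatibility logic, and psutil is a third-party package you’d need to install:

```python
# Rough self-check against the published minimums for Android apps on
# Windows 11 (8GB RAM minimum, 16GB recommended, SSD, 8th Gen Core i3 /
# Ryzen 3000 / Snapdragon 8c or newer). NOT Microsoft's actual check.
import platform
import psutil  # third-party: pip install psutil

def meets_ram_minimum(min_gb=8):
    return psutil.virtual_memory().total >= min_gb * 1024 ** 3

def cpu_description():
    # CPU generation can't be parsed reliably across vendors, so just
    # report the name for comparison against Microsoft's list.
    return platform.processor() or platform.machine()

print("RAM meets 8GB minimum:", meets_ram_minimum())
print("RAM meets 16GB recommendation:", meets_ram_minimum(min_gb=16))
print("CPU (compare against Microsoft's list):", cpu_description())
```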

Ryzen chips from just a few years ago won’t work with Android apps on Windows.

If you have a Windows computer from the last few years, there’s a good chance you’ll be able to run Android apps on it. Assuming, of course, you can run Windows 11, which requires a TPM. It should not come as a surprise that Microsoft needs so much power to run Android apps. Virtualization always comes with some overhead, especially when moving between computing platforms. Even Google is not immune. The Android maker announced a beta for its own Google Play Games on Windows, allowing you to get content from its store rather than Amazon’s. It requires at least eight CPU threads, 8GB of RAM, an SSD, and a “gaming-class” GPU. That last one will be tough in an era when video cards go for twice their MSRP. By that standard, Microsoft’s system requirements are downright reasonable.

from ExtremeTech https://ift.tt/zyZtYsM

NEWS TECHNOLOGIE

A new combination of software and VBIOS is reportedly able to bypass the anti-mining restrictions Nvidia put in place on some of its Ampere GPUs. As you may recall, Nvidia began releasing Low Hash Rate (LHR) versions of its GPUs last year in an effort to thwart the crypto miners who were gobbling up its GPUs. The company hoped this would allow throngs of desperate gamers to buy its GPUs instead. The feature didn’t really change things much as far as availability or pricing, but Nvidia at least made an attempt. Now a software developer has created a tool that supposedly unlocks any LHR card’s full Ethereum mining potential, assuming the GPU owner has the chutzpah to flash his or her GPU’s BIOS to make it work.

The software utility is called Nvidia RTX LHR v2 Unlocker by Sergey, according to a report by Videocardz. The utility flashes the BIOS of the GPU in question to restore its full mining powers. The one caveat is that it must be used with a custom modded driver also made by Sergey, so you’re talking about some experimental activities here with expensive and hard-to-find hardware. We recommend only the brave attempt it.

On the GitHub page, the developer notes that if something goes wrong, you can revert both the drivers and the BIOS to their original state. Experienced hardware junkies may feel a twinge of PTSD over failed past updates just reading about it.

The LHR Unlocker tool in action. (Image: Github)

The utility works with gimped Ampere GPUs. This includes every model starting with the RTX 3060 and going up the stack to the RTX 3080 Ti. The utility also works with some of Nvidia’s A-class GPUs, which are workstation boards. The software will be available on February 26th, so you have some time to work up your bravado.

The software and BIOS combo supposedly will allow quite a boost in mining rates. For example, the RTX 3080 LHR is listed as offering a hash rate for ETH of 71.7 MH/s on Hashrate.no, but Sergey’s software will lift that to around 100 MH/s. The RTX 3070 Ti will see a bump from 57 MH/s to 69 MH/s, supposedly.
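Taking those figures at face value (they’re still unverified claims), the implied uplift is easy to compute:

```python
# Percentage uplift implied by the reported hash rates. The numbers
# come from the article above; the unlocked rates are unverified.
claimed_mhs = {
    "RTX 3080 LHR": (71.7, 100.0),  # (limited, unlocked) in MH/s
    "RTX 3070 Ti":  (57.0, 69.0),
}

for card, (limited, unlocked) in claimed_mhs.items():
    print(f"{card}: +{unlocked / limited - 1:.0%}")
# RTX 3080 LHR: +39%
# RTX 3070 Ti:  +21%
```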

The developer-provided list details the new and improved hash rates. (Image: Github)

This may be bad news for gamers. The new combination goes beyond previous efforts such as NBMiner, which were only able to restore about 70 percent of the cards’ mining capabilities. That said, the software hasn’t even been released yet, so we’ll have to wait and see what it can do in the real world. The crypto market in general has been extremely volatile lately, and with the impending move to Proof of Stake for ETH, it remains to be seen what 2022 holds for GPU miners. Neither AMD nor Intel has announced any type of hash rate limiting measures for their current and upcoming GPUs.

from ExtremeTech https://ift.tt/UcsPVJb