Thursday, June 30, 2022

NEWS TECHNOLOGY

The smartphone in your pocket is probably more powerful than the computer you were hauling around just a few years ago. All that power means you can run dozens of apps, stream countless hours of video, and even snap photos that are worthy of framing. But that’s just the start — there’s a lot more untapped power in your phone, provided you’re willing to root Android. In the first few years of Android’s existence, this was a fairly simple procedure on most devices. There were even apps and tools that could root almost any Android phone or tablet with a tap, and you’d be ready to truly master your device in mere minutes. As Android has become more capable, the allure of rooting has diminished somewhat — it’s also much harder and comes with more drawbacks.

The advantages of rooting

Gaining root access on Android is akin to running Windows as an administrator. You have full access to the system directory and can make changes to the way the OS operates. As part of rooting, you install a management client like Magisk — SuperSU used to be the top option but has fallen into disrepair. These tools are basically the gatekeepers of root access on your phone. When an app requests root, you have to approve it using the root manager.

In the case of Magisk, you can also use the client to make other changes to the phone via numerous community-developed modules. Let’s say you don’t like the system theme on your phone. With root, you can change that. You can also manually back up app data so you never lose it again. Want to change your device’s CPU characteristics? That’s also possible with root.

If you’ve ever looked at your phone and thought, “I wish I could do [some very specific thing],” rooting might make it happen. In the past, gaining root allowed you to add necessary features to Android — people rooted phones to get features like imposing low-power sleep for apps, managing permissions, and taking screenshots without a PC. Over time, these features have been brought into the standard Android feature set, making root less useful for casual phone enthusiasts. Most of the best use cases for rooting today are on the highly technical side. For example, you can replace the kernel on your phone to change the way the hardware operates at the lowest level, or you could forcefully block ads from appearing on your phone.

Modern tools like Magisk are also “systemless” root managers. That means the changes are stored in the boot partition rather than in the system partition itself. That makes it easier to go back to an unrooted system (or make apps think you’re unrooted) than it used to be.

The Risks of Rooting

Rooting your phone or tablet gives you complete control over the system, but there are risks to rooting, and you should only do it if you know what you’re getting into. Android is designed in such a way that it’s hard to break things with a limited user profile. A superuser, however, can really trash the system by installing the wrong app or making changes to system files. The security model of Android is also compromised when you have root. Some malware specifically looks for root access, which allows it to really run amok.

For this reason, most Android phones are not designed to be rooted. There’s even an API called SafetyNet that apps can call on to make sure a device has not been tampered with or compromised by hackers. Banking apps, Google Pay, and others that handle sensitive data will do this check and refuse to run on rooted devices. Magisk supports hiding root, but that won’t always work. It’s a constant game of cat and mouse with Google. If losing access to high-security apps is a big deal, you might not want to mess around with root.

Root methods are sometimes messy and dangerous in their own right. You might brick your device simply trying to root it, and you’ve probably (technically) voided your warranty doing so. Rooting also makes it harder (or impossible) to install official updates, and ROMs like Lineage can be difficult to install and buggy once you do. If having root access is really important to you, you might be left waiting on flawed software while you beg for a new root method or a modded OS update.

Should You Do It?

If you’ve been using Android for a while, you’ve probably noticed gaining root access on most devices is much harder than it once was. There were exploits years back that could root almost any Android device in a few minutes, but that’s much less common now. The last essentially universal exploit was Towelroot in mid-2014, but Google patched that rather quickly. Google patches these flaws often before we even know they exist because having active exploits in the system is a very bad thing for most users. These are security holes that can be utilized by malware to take over a device and steal data. There are monthly security updates to patch these holes, but on a rooted phone, you are responsible for security. If you’re going to root, you have to accept that your device will require more frequent attention — the security safety net offered by Google and the device maker won’t be there to save you.

If you’re not familiar with Android’s tools and how to fix issues with a command line, you probably shouldn’t dive into rooting your phone. Root can be a lot of fun to play around with, but it can also lead to plenty of frustration as you try to fix errors caused by overzealous modding. If you bought your phone with the intention of tinkering, by all means, go nuts.

When something does go wrong (and it will at some point), it’s all on you to fix it. You might be left scouring old forum posts and begging for help in chatrooms to fix your phone. You have to be willing to tackle some vexing issues if you’re going to live the rooted lifestyle. You also have to look at what you’re getting; Android in its unmodified state is much better than it used to be. Most people just don’t have a good reason to root phones anymore.




from ExtremeTech https://ift.tt/ozbHatu


Some of the images beamed back to Earth by robotic explorers like Perseverance can make Mars look almost familiar, in an arid desert sort of way. The conditions on the surface are completely alien, though. It’s frigid, and the atmosphere is so thin that clouds are rare, but NASA believes understanding more about the red planet’s cloud cover could help us understand how it got that way. Enter, you, citizen scientist. NASA has a new tool where anyone can help advance its research by hunting for signs of Martian clouds in a massive data set. Trust us, it’s more fun than it sounds. 

The more we learn about Mars, the clearer it becomes that it went through major changes in the distant past. Today, the world is a craggy wasteland, coated in a layer of ultra-fine dust. A few eons ago, Mars had flowing water and (probably) a thick, Earth-like atmosphere. Now, the atmosphere is only one percent as dense as ours, and most of that is carbon dioxide. 

Even with its scant atmosphere, Mars does have light cloud cover around 30-50 miles (50-80 kilometers) above the surface (see above, an image captured by Curiosity in 2021). Scientists believe understanding cloud formation will offer insights into the mechanisms that stripped Mars of its atmosphere, and luckily, we have 16 years of data on it. The Mars Reconnaissance Orbiter (MRO) has been whirling around the planet for almost two decades, and it has an instrument called the Mars Climate Sounder. 

According to Armin Kleinboehl of NASA Jet Propulsion Laboratory, there’s just too much data for the small team at JPL to dig through, and that’s where you come in. The Cloudspotting on Mars website will let you scour the data in search of clouds, but you’ll need to run through a tutorial to have any idea what you’re doing. 

The Mars Climate Sounder scans nine spectral channels in the visible and infrared to measure temperature, humidity, and dust content in the Martian atmosphere. The raw data from the instrument is presented as a graph (example above), with altitude on the Y-axis and time on the X-axis. As the probe orbits, clouds peek up above the background signal, peak, and then drop off — so they appear in the data as arches that you can readily identify. The tool runs you through how to look at different frames with varying brightness, as well as inverting the image to spot faint arches. All you have to do is mark the peak, which corresponds to the altitude of the cloud.
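To make the arch-marking idea concrete, here's a minimal Python sketch. This is not NASA's actual pipeline, and all the numbers are synthetic; it just shows how marking the strongest point of an arch-shaped signal on an altitude grid recovers a cloud altitude:

```python
# Synthetic illustration: clouds show up as arches in altitude-vs-time
# data, and the arch's peak corresponds to the cloud's altitude.
# Every value below is made up for demonstration purposes.

def arch_peak_altitude(signal, altitudes):
    """Return the altitude at the strongest point of the signal."""
    peak_index = max(range(len(signal)), key=lambda i: signal[i])
    return altitudes[peak_index]

# Fake altitude grid spanning 50-80 km (roughly where Martian clouds
# are observed) and an inverted-parabola "arch" peaking at 65 km.
altitudes = [50 + i * 0.5 for i in range(61)]          # 50.0 .. 80.0 km
signal = [1.0 - ((a - 65.0) / 15.0) ** 2 for a in altitudes]

print(arch_peak_altitude(signal, altitudes))  # 65.0
```

The citizen-science site has volunteers do exactly this step by eye: click the top of the arch, and the altitude falls out of the plot's axes.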

It’s surprisingly fun to go hunting for clouds on Mars, and you don’t even need to sign up to start helping out. That said, the site does encourage you to register so you can participate in discussions and be cited in research in case you spot anything important.




from ExtremeTech https://ift.tt/fF6Rmdj


(Photo: Peellden, Wikimedia Commons, CC BY-SA 3.0)
Taiwanese chip-making powerhouse TSMC is telling its customers to get with the times. The company recently stated it’s attempting to get customers using antiquated nodes to transition their products to newer ones, such as its 28nm process. TSMC says it will even help them move, like a good friend with a pickup. This applies mostly to companies using the comparatively ancient 40nm and 65nm nodes.

News of TSMC’s plans comes from Kevin Zhang, senior VP of business development at TSMC. According to AnandTech, TSMC has no plans to build any more fab capacity for some of its older, mature nodes. All the capacity that exists today for 40nm and older nodes is all that will ever exist. “We are not currently [expanding capacity for] the 40 nm node,” he said. “You build a fab, fab will not come online [until] two year or three years from now. So, you really need to think about where the future product is going, not where the product is today.” AnandTech says TSMC currently gets about 25 percent of its revenue from mature processes at 40nm and larger. However, those nodes are long since paid for, and the wafers are cheap due to their age. This reduces profit-per-wafer for TSMC, which is a likely motivator for it to nudge its customers onto more modern nodes.

28nm cost scaling

From TSMC’s perspective, though, while the customer pays more for a more advanced node, the move comes with obvious benefits. According to Zhang, some customers might question such a move when the 40nm chip works just fine. “I think the customer going to get a benefit, economic benefit, scaling benefit, you have a better power consumption,” he said. Summarizing, he stated, “you go to a next node, you get a better performance and better power and overall you get a system level benefit.” Customers will also get a lot more dies per wafer.
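The dies-per-wafer claim follows from simple area scaling. Here's a back-of-the-envelope sketch; it ignores reticle limits, scribe lines, and the fact that real designs rarely shrink linearly with the node name, and the 100 mm² die is a hypothetical example:

```python
# Ideal area scaling from a 40nm design to 28nm. Node names are
# partly marketing labels, so treat this as illustrative arithmetic,
# not a prediction for any real chip.

def scaled_die_area(area_mm2, old_node_nm, new_node_nm):
    """Die area if all dimensions scaled with the node name."""
    return area_mm2 * (new_node_nm / old_node_nm) ** 2

old_area = 100.0                      # hypothetical 100 mm^2 die at 40nm
new_area = scaled_die_area(old_area, 40, 28)
print(round(new_area, 1))             # 49.0
```

A die that shrinks to about 49 percent of its original area yields roughly twice as many candidates per wafer, which is the "scaling benefit" Zhang is describing.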

TSMC previously announced it will be increasing its capacity for mature and specialty nodes by 50 percent in the coming years. This effort will see the company focusing on 28nm nodes in particular. It has announced plans to build a fab in Kumamoto, Japan that will focus on N12, N16, N22, and N28 nodes. It’s also building three more fabs to assist in this effort. Two of those facilities will be in Taiwan, with the third in China.

The majority of chips made on older nodes go into smart appliances, phones, IoT devices, and especially cars. It’s estimated that in a few years, cars will feature over 1,500 individual chips; currently, most cars use only several hundred. This was notably apparent as the pandemic-era chip shortage caused turmoil in the auto industry.




from ExtremeTech https://ift.tt/8kJ2uZ3


(Photo: Mercedes-Benz)
The Mercedes-Benz VISION EQXX concept car has beaten its own single-charge range record, thanks to a 747-mile trip last week. 

In April, the company’s stab at “the silver bullet to the electric road trip” completed a 626-mile trip on a single charge. But after driving from Stuttgart, Germany to Cassis, France, the electric vehicle’s battery sat at about 15 percent capacity. This prompted the engineering team to wonder how much further the EV would be able to go. 

Cue take two: a 14.5-hour trip that took the VISION EQXX from Stuttgart to Silverstone, UK across two days. The team strove to imitate real-world conditions as much as possible, enduring everyday challenges such as summer heat and high traffic density. Nyck “The Dutchman” de Vries, who races for the Mercedes-EQ Formula E team, conducted the final portion of the trip as a guest driver. De Vries took the VISION EQXX up to its limited top speed of 87 miles per hour during 11 laps of Silverstone’s famous race track before exhausting the vehicle’s charge in the pit lane.

“Yet again, the VISION EQXX has proven that it can easily cover more than 1,000 km on a single battery charge, this time faced with a whole different set of real-world conditions,” said Markus Schäfer, Chief Technology Officer for development and procurement at Mercedes-Benz. “As Mercedes-Benz strives to go all-electric by 2030 wherever market conditions allow, it is important to show to the world what can be achieved in real terms through a combination of cutting-edge technology, teamwork and determination.”

(Photo: Mercedes-Benz)

Single-charge EV ranges floated around 250 miles at the beginning of last year. Mercedes had to triple this figure in order to bring the VISION EQXX to the level of success it saw last week. While the easy answer would’ve been to build a bigger battery, this would have weighed the vehicle down. Instead, Mercedes tried its hand at using lightweight materials to create a unit that didn’t sacrifice capacity for size. The result was an energy-dense battery that was about 30 percent lighter than the one it started with, and half the size.
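Those two figures imply a big jump in energy density. A quick sketch of the arithmetic, where the 30 percent and "half the size" numbers come from the article but the baseline capacity, mass, and volume are invented placeholders:

```python
# If pack capacity stays fixed while mass drops 30% and volume drops
# 50%, gravimetric density rises ~1.43x and volumetric density doubles.
# The baseline values are hypothetical, not the EQXX's real specs.

capacity_kwh = 100.0        # assumed pack capacity (unchanged)
old_mass_kg = 700.0         # hypothetical starting pack mass
old_volume_l = 500.0        # hypothetical starting pack volume

new_mass_kg = old_mass_kg * 0.70      # "about 30 percent lighter"
new_volume_l = old_volume_l * 0.50    # "half the size"

gravimetric_gain = (capacity_kwh / new_mass_kg) / (capacity_kwh / old_mass_kg)
volumetric_gain = (capacity_kwh / new_volume_l) / (capacity_kwh / old_volume_l)

print(round(gravimetric_gain, 2), round(volumetric_gain, 2))  # 1.43 2.0
```

Note that the gains cancel out the assumed baseline entirely; they depend only on the percentage reductions, which is why the placeholder values are harmless here.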

The battery doesn’t have to power the EV’s interior, either. Rooftop solar panels power most of the interior technology—which, by the way, makes the VISION EQXX look like something out of the 1964 New York World’s Fair. White seats, brushed steel accents, and cool-toned ambient lighting combine for a sleek aesthetic to match the vehicle’s impressive range. Mercedes says it used animal-free textiles, like cactus fibers, mushrooms and vegan silk, to craft the attractive yet practical interior. 

The VISION EQXX is a concept vehicle and isn’t currently slated for production. That being said, the vehicle—which the company is calling “the most efficient Mercedes ever built”—presents a likely irresistible challenge to other EV makers, including Tesla, whose cars currently max out at about 400 miles on a single charge. 




from ExtremeTech https://ift.tt/qSFKMeW


(Photo: Gilles Lambert/Unsplash)
Most know by now that last week, the Supreme Court finalized its decision to overturn Roe v. Wade, the 1973 landmark ruling that constitutionally protected the right to abortion. The product of the decision is a jumble of states with varying levels of protection and prosecution. But alongside people’s concerns regarding bodily autonomy and medical-legal checks and balances, a new, insidious problem has quietly risen to the surface: health apps are using people’s grief and anxieties to market their products.

As many have pointed out, post-Roe America presents a complication that pre-Roe America did not: an unprecedented level of digital surveillance. Those seeking reproductive care in certain states now run the risk of being prosecuted by a court system that will subpoena service providers and software developers for any data that hints at a pregnancy. Ever since the Supreme Court’s Roe decision draft leaked last month, people have worried that digital period trackers (or health apps that otherwise contain a period-tracking feature) might reveal a blip in someone’s menstrual cycle, thus aiding abortion prosecution. 

This, unfortunately, is a well-founded worry. As we discussed in May, digital evidence has already been recruited in court cases concerning reproductive autonomy. Most health apps, including period trackers, store user data outside of the user’s device. This means the end user doesn’t retain full control of their data. Companies can decide independently whether they want to sell user information to data brokers, who often supply data to the government for both malevolent and well-intentioned purposes. And if the government comes knocking, the company—not the user—gets to decide whether to open the door.

So then it’s a matter of getting companies to promise not to open the door…right? Perhaps not. The world’s most powerful tech companies are refusing to say whether they’ll continue their relationships with historied data brokers or respond to law enforcement subpoenas related to reproductive care. In light of the Supreme Court’s decision, Vice’s Motherboard asked a number of social media, telecommunications, digital finance, and rideshare companies if they will “provide data in response to requests from law enforcement if the case concerns users seeking or providing abortions.” Included were Apple, Facebook, Twitter, TikTok, Google, Amazon, Discord, and Uber. None of the companies provided an answer. 

Tech in general can be very “monkey see, monkey do.” If major apps and networks were to pledge not to share data with law enforcement when reproductive freedom is on the line, one could argue others, including smaller software developers, would do the same. Instead, companies seem nervous to lead by example, electing to wait for someone else to pipe up—and likely make some measure of financial sacrifice—first. And until that happens (and companies actually follow through on their promises), it’s impossible to rely on any measure of health data integrity.

Meanwhile, smaller health app developers have seen a marketing opportunity in post-Roe anxiety. Stardust, a period tracking smartphone app that advertises itself as “women-owned, privacy-first,” quickly came up with a “hands off our bodies, hands off our data” marketing campaign over the weekend.  A majority of the app’s security messaging centers around end-to-end data encryption. But the app’s social media managers have ignored (and even deleted) comments asking whether Stardust will hand over data to law enforcement or work with data brokers: two practices that would essentially render the app’s so-called security promises moot. Stardust became the most-downloaded free app on iOS in the days immediately following the Supreme Court’s decision, despite its privacy policy stating it would comply with law enforcement data requests. ExtremeTech attempted to reach Stardust for comment regarding its marketing, potential relationships with data brokers, and any potential amendments to its policy. Stardust didn’t respond. 

Stardust turned off comments on its Instagram post about data security, for reasons we can’t quite imagine.

GP Apps, the developer behind the Period Tracker app, has similarly attempted to appeal to those anxious about Roe’s overturn (though with less of an activist tilt). It recently put out a statement assuring users that it would not work with law enforcement on abortion prosecutions.  “We want to assure our users that we are adamantly opposed to government overreach and we believe that a hypothetical situation where the government subpoenas private user data from health apps to convict people for having an abortion is a gross human rights violation. In such a scenario, we will do all we can to protect our users from such an act,” the statement reads. Further down it explains that users have a choice to use the app offline, thus keeping data local. Still, the app’s privacy policy generally says it will comply with subpoenas and other legal requests. Right now it’s unclear whether Period Tracker intends on amending the official policy, and it’s unlikely that a public statement would carry as much legal weight as a privacy policy. (One thing’s for sure: an Instagram post would not.) 

It’s a harsh lesson in media literacy. Until companies begin writing reproductive choice protection into their privacy policies, only rigorous, sophisticated external testing can reveal whether their apps are truly safe post-Roe. Without that, period trackers and other health apps can’t be considered truly secure—no matter what they’ve posted online or said at a press conference. 

Some hope legislation will help fill in the gaps. Senator Elizabeth Warren (D-MA) recently spearheaded the Health and Location Protection Act, which would ban the sale of all location and health data. The ban was specifically proposed in light of mounting worries regarding reproductive data privacy. Still, if the bill passes, it will only prevent companies from giving up user data in exchange for money; it will have nothing to do with private entities’ willingness to share information with law enforcement. 

Are the newfound risks associated with health apps worth the occasional convenience? Like many things, that’s up for individual users to decide. But as they currently stand, most health apps can’t entirely be trusted to keep user data private, and the stakes of a slip-up have never been higher. For now, it might be best to stick to pencil and paper.




from ExtremeTech https://ift.tt/JKw3krS

Wednesday, June 29, 2022


NASA has big plans for the next decade as humanity returns to the moon decades after the end of the Apollo program. While the agency is still fiddling with the Space Launch System (SLS) rocket at the heart of the new Artemis program, it’s already making plans for the Lunar Gateway station. It all starts with the newly launched CAPSTONE mission, which will test that station’s proposed orbit around the moon. 

The future of Artemis is about big rockets like SLS and the SpaceX Starship, but CAPSTONE is a small test vehicle. So, NASA chose Rocket Lab’s Electron rocket to take it into space. CAPSTONE, which is an excellent NASA acronym for Cislunar Autonomous Positioning System Technology Operations and Navigation Experiment, launched just before 6 AM today from Rocket Lab’s New Zealand launch facility. “While CAPSTONE’s journey to the Moon has only just begun, we’re proud to have safely delivered CAPSTONE to space,” said Rocket Lab CEO Peter Beck.

The satellite itself clocks in at just 55 pounds (25 kilograms), but that’s no small feat for Electron. According to Beck, CAPSTONE was the largest and most challenging payload ever launched on the light-duty Electron rocket. The spacecraft is currently en route to the moon, where it will not be a permanent fixture. NASA will use CAPSTONE to validate its plans for the upcoming Lunar Gateway.

Early in the Artemis program, NASA plans to perform short excursions to the lunar surface with the aid of the Starship-based Human Landing System. Closer to 2030, the Lunar Gateway will come online to support longer trips to the surface, but NASA wants to put it in a fancy new kind of orbit. CAPSTONE is headed for the station’s proposed near rectilinear halo orbit (NRHO) around the Moon. It will orbit at an altitude of 47,000 miles (76,000 kilometers) at the north pole but at just 2,100 miles (3,400 km) when passing over the south pole. This uneven orbit will ensure continuous communication, and it will be more fuel-efficient for station keeping thanks to gravitational interactions between the moon and Earth. 

After launch, CAPSTONE is in a stable Earth orbit, where it is still mated to Rocket Lab’s Photon spacecraft bus. In the coming days, Photon will use its HyperCurie engine to set CAPSTONE on the right trajectory before separating. CAPSTONE should be in its intended orbit in about four months, and the NRHO testing will take another six months after that.




from ExtremeTech https://ift.tt/3jSVAEP


(Photo: Dolenc090 on YouTube)
The name 3dfx certainly warms a lot of, um, mature gamers’ hearts. For a lot of us, the Voodoo 2 was our first actual 3D graphics card. And it was glorious. Even though we were gaming at 800 x 600, we loved it. Going from software rendering to hardware rendering for the first time was absolutely incredible. For the younger readers, it was similar to going from a hard drive to an SSD for the first time. The difference in performance left us speechless. Since then a lot of hobbyists have recreated a similar retro gaming environment to bask in the nostalgia. The only problem is to truly experience old school gaming, you need a CRT monitor. Those are very hard to find, and until now, 3dfx drivers didn’t support widescreen displays like we use today. Now a 3dfx forum user has uploaded a new driver that allows widescreen gaming, finally.

It’s called the 3dfx Wide Driver, and if you’re into this sort of thing, you have to grab it now. It will only be hosted on the forum for 15 days, so don’t hesitate. It might be moved to a more permanent home eventually. According to HotHardware, a user named Dolenc at 3dfxzone modified the driver and posted it for download. It allows for widescreen display usage both on the Windows desktop and in games. It supports Direct3D, OpenGL, and even Glide titles, though Glide might not always work perfectly. Note that this is Windows 98 and Me we’re talking about here; you can try it on Windows XP, but nothing is promised. OpenGL and Direct3D games should work as expected, but Glide games have some limitations: 16:9 has a maximum resolution of 1600×900, while 21:9 maxes out at 1920×800.
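Those caps line up with the stated aspect ratios, allowing for the usual "21:9" rounding. A quick sanity check in Python, using only the resolutions quoted in the post:

```python
# Verify the driver's maximum Glide resolutions against their nominal
# aspect ratios. 1920x800 works out to 2.40:1, slightly wider than a
# literal 21:9 (about 2.33:1), which is typical of "21:9" displays.

def ratio(width, height):
    return width / height

print(ratio(1600, 900) == 16 / 9)    # True: exactly 16:9
print(round(ratio(1920, 800), 2))    # 2.4
print(round(21 / 9, 2))              # 2.33
```

So the 16:9 cap is a clean match, and the 21:9 cap matches the wider 2.4:1 variant of ultrawide rather than the literal ratio.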

The driver adds support for 16:9, 16:10, and 21:9 aspect ratios. The driver-maker uploaded a video (above) showing 1080p gameplay in several ancient (but much-loved) titles, including the original Unreal, Unreal Tournament, Half-Life: Opposing Force, and several others. As you can see in the video, gameplay is smooth, with a seemingly solid frame rate. He’s using a Voodoo 5 5500, though, which is the most powerful GPU 3dfx ever released, and also its last. It sported two VSA-100 chips and 64MB of 166MHz SDRAM. One interesting note is that the VSA-100 was built on a 250nm process.

After the 5500’s release, the company folded, with the IP eventually being purchased by Nvidia in 2001. Its fabled Voodoo 5 6000, with its quad-chip VSA-100 design, was never released. However, it remains a much sought-after collector’s item even to this day. At one point there was an “announcement” that a group of people were going to bring the company back to life and launch the Voodoo 5 6000, but that never came to pass.




from ExtremeTech https://ift.tt/zoJgF8S


(Photo: Ildar Garifullin/Unsplash)
It can’t be overstated: buying a car is ridiculously complicated. Between deceptive advertising, sleazy sales tactics, and confusing fees, most Americans rank buying their next ride as more stressful than getting married. Vehicle prices have also skyrocketed since the start of the COVID-19 pandemic, resulting in an extra layer of financial anxiety. 

The Federal Trade Commission (FTC) is hoping to change this. A new set of proposed rules announced Thursday would prohibit auto dealers from using bait-and-switch advertising practices or tacking on nonsense fees.

The first of the proposed rules would explicitly ban advertising that deceives customers into initiating a purchase, only to learn the real price or terms of the purchase are different from what was marketed. This usually takes the form of advertised low sticker prices, zero percent APR financing, and other hooks that end up being absent from the actual offered deal. After test-driving a vehicle or two, haggling with a salesperson, and undergoing a credit check, these bait-and-switch scenarios can feel confusing and even manipulative.

(Photo: Erik Mclean/Unsplash)

Another pair of proposed rules would prohibit dealers from implementing surprise or fraudulent “junk fees.” These are the miscellaneous items tacked onto the final price just before signing: “nitrogen-filled” tires, paint protection, UV coatings, and other (often invisible) add-ons that are not-so-conveniently forced upon unwitting buyers. Under the new rules, the FTC would require that dealers provide customers with the price of the car without these add-ons, and only add the extra items once the customer has provided their clear, written consent. 

A final rule rounds off the other proposals: dealers would be required to disclose to customers a vehicle’s true “offering price,” excluding only taxes and government fees. The price of any add-ons must be detailed in writing along with a disclosure stating such add-ons are not required to purchase or lease the vehicle.   

Automotive consumers generate a significant portion of FTC complaints. Despite previous attempts to engage law enforcement and mitigate deceptive auto dealer practices, complaints regarding vehicle sales and maintenance make up about 10,000 FTC complaints annually. A preliminary regulatory analysis estimates the rules’ net economic benefit would sit around $29 billion over a decade. 

As of now, the FTC’s proposed rules are just that; they don’t yet guide auto sales. The FTC is allowing 60 days for comments and questions from the public, which will help guide any revisions and determine whether the rules are implemented at all. 




from ExtremeTech https://ift.tt/3IaAdr5


We’ll have to wait a little longer to find out what’s up with one of the most interesting asteroids in the solar system. NASA has confirmed that a minor software glitch will cause a delay for its upcoming Psyche mission, which was set to launch in September 2022. The motion of the planets is working against NASA here, so even a small delay means Psyche won’t be able to launch in 2022 at all, and that puts the science phase of the mission toward the end of the decade. 

Psyche (the spacecraft) is named after the asteroid it will eventually visit. 16 Psyche (the asteroid) sits in the great asteroid belt along with uncountable other space rocks, but this one is special enough that NASA wants to get a closer look. It’s the largest known M-type asteroid, meaning it’s rich in metals. Scientists have speculated that Psyche is the exposed core of a planet that was smashed to pieces by impacts early in its formation. 

Naturally, getting a closer look at a planetary core is an alluring possibility. Researchers at MIT released the best map of Psyche’s surface yet, which could help NASA plan its observational campaign, and now the team will have more time to go over it. 

NASA says that it found a small issue with the flight software, which caused the initial delay from August to September. However, testing and validation of the fix won’t be done in time. The software in question manages the probe’s orientation and trajectory, and that system needs to work perfectly to ensure it can point its antennas at Earth. No communication, no mission.  


Even with the powerful Falcon Heavy launch vehicle, NASA is at the mercy of physics here. The launch window closes on October 11 — after that, Earth will be moving away from Psyche, and NASA won’t have time to complete testing before the window closes. The only choice is to wait for Earth to make an orbit and launch Psyche in 2023. 

Before the delay, NASA expected Psyche to reach its target by 2026. If the 2023 launch goes ahead, the probe should reach the asteroid in 2029. There’s also a 2024 window that is not entirely out of the question, which would deliver the spacecraft by 2030. NASA was also hoping to use the Falcon Heavy’s massive payload capacity to send two “ride-along” missions with Psyche. The Janus mission was set to study binary asteroids, and the Deep Space Optical Communications technology demonstration is intended to test high-speed laser communications with Psyche. NASA is evaluating both missions to see if they will go ahead as planned.

from ExtremeTech https://ift.tt/IDEx9QF

NEWS TECHNOLOGIE

Earlier this year, Nvidia was the victim of a hack on its network. The fallout was not trivial: the hackers released a great deal of proprietary information, dumping the DLSS source code and information about upcoming GPUs, and even created workarounds for Nvidia’s anti-mining LHR technology. Now it’s AMD’s turn in the barrel, according to a new report. AMD has allegedly been hacked, with the perpetrators exfiltrating over 50GB of data. At this time it’s not clear if the data was taken directly from AMD or from one of its partners.

The actual hack happened back in January of this year, but we’re just now learning about it. It’s not clear which group is responsible, as the outfit now talking about it is either a middleman or bought the data from someone else. This group, known as RansomHouse, says on its website that it neither hacks nor uses malware. However, it is allegedly trying to negotiate a ransom from AMD. The group recently included AMD in an ominous list of companies on its website. It says the companies on the list “have either considered their financial gain to be above the interests of their partners/individuals who have entrusted their data to them or have chosen to conceal the fact they have been compromised.” This sounds like it should translate to “they haven’t paid the ransom.”

The RansomHouse group posted this summary on its darknet site. (Image Source: RestorePrivacy.com)

According to a summary by RestorePrivacy, the stolen data included network files, system information, and some AMD passwords. The group posted a sample of the data it has in its possession, but RestorePrivacy doesn’t say if it was able to verify the data or not. The group claims the data was easy to get as AMD used common passwords. These include the actual word “password,” as well as “123456” and “AMD!23” among others. The group says it has “more than 450 Gb” of AMD’s data. It’s not clear why it refers to it as gigabits instead of gigabytes. (Possibly to make the hack look larger – Ed)
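The gigabits-versus-gigabytes question actually reconciles the two figures in this story. If RansomHouse really does mean gigabits, then its "more than 450 Gb" claim lines up with the "over 50GB" reported above, since a byte is eight bits. A quick sanity check:

```python
# Convert the claimed "450 Gb" (gigabits) to gigabytes.
# 1 byte = 8 bits, so Gb -> GB is a division by 8.
claimed_gigabits = 450
gigabytes = claimed_gigabits / 8
print(f"{claimed_gigabits} Gb = {gigabytes} GB")  # 450 Gb = 56.25 GB
```

So "450 Gb" and "over 50GB" plausibly describe the same haul, just in different units.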

RansomHouse claims it’s a “professional mediators community” rather than a hacking group. It claims it doesn’t create or deploy malware, nor does it encrypt any victims’ data. So far it lists six victims on its darknet website, including ShopRite and the Saskatchewan Liquor and Gaming Authority (SLGA).

AMD has responded to questions about the breach with an official statement. “AMD is aware of a bad actor claiming to be in possession of stolen data from AMD. An investigation is currently underway,” said an AMD spokesperson.

from ExtremeTech https://ift.tt/QHG5KNa

NEWS TECHNOLOGIE

Feature image by Eric Kilby, CC BY-SA 2.0
This week, the PCI-SIG working group that controls the PCI Express standard announced that it was on track to finalize and release the PCIe 7.0 standard by 2025. The amount of time between finalization and commercialization varies, but is typically 12-18 months. We might reasonably expect PCIe 7.0 devices in-market by 2026, with support for up to 512GB/s of bidirectional bandwidth.

The PCIe 4.0-compliant platforms available today support transfer rates of up to 64GB/s bidirectional from an x16 slot. PCIe 5.0 is technically available, but GPUs and SSDs don’t widely support the standard yet, so PCIe 7.0 represents an effective 8x increase in bandwidth over what’s actually usable today. The first PCIe 5.0 devices should be available toward the end of this year.
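Each PCIe generation roughly doubles per-lane throughput, so x16 slot bandwidth doubles as well. A sketch of the progression, anchored to the 64GB/s (PCIe 4.0) and 512GB/s (PCIe 7.0) bidirectional figures above:

```python
# Approximate bidirectional bandwidth of a PCIe x16 slot per generation.
# Each generation doubles throughput; PCIe 4.0 x16 is ~64GB/s bidirectional.
def x16_bidirectional_gbps(generation: int) -> int:
    base_gen, base_bw = 4, 64  # PCIe 4.0 x16 as the anchor point
    return base_bw * 2 ** (generation - base_gen)

for gen in range(4, 8):
    print(f"PCIe {gen}.0 x16: ~{x16_bidirectional_gbps(gen)} GB/s bidirectional")
```

Three doublings separate PCIe 4.0 from PCIe 7.0, which is where the "effective 8x" figure comes from.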

The Bandwidth Bonanza

PCI Express debuted on the desktop with the launch of AMD’s Socket 939 platform back in 2004. With support for up to 4GB/s of unidirectional bandwidth (8GB/s bidirectional) from an x16 slot, it blew the doors off the old PCI standard. The reason I mention PCI instead of AGP is that high-end GPUs have never been particularly limited by interface bandwidth. Comparisons back in 2004 showed that the gap between 8x AGP and PCIe 1.0 performance was essentially nil, while moving from PCI to PCIe (and from a shared bus topology to a point-to-point interconnect) immediately improved the performance of Ethernet adapters, storage controllers, and various other third-party devices.

From 2004 – 2011, the PCIe standard moved ahead at a brisk pace, with PCIe 2.0 and 3.0 each approximately doubling bandwidth. Then, from 2011 – 2019, the consumer bandwidth market stood still. We didn’t see PCIe 4.0 on the desktop until 2019, with the launch of AMD’s Zen 2 microarchitecture and X570 motherboard chipset. Since then, however, the PCI-SIG has been on a tear. PCIe 5.0 deployed with Alder Lake in 2021, even if compatible consumer hardware isn’t available yet. We don’t know when PCIe 6.0 might be available in consumer products, but 2023 – 2024 is a realistic time frame. Now it appears those chips won’t even be in-market for more than a few years before PCIe 7.0 hardware starts pushing in.

So what changed?

Some of the issues were technical — there were real difficulties in continuing to ramp bandwidth between PCIe 3.0 and PCIe 4.0, and some new signaling and materials engineering challenges had to be solved. It’s also true, however, that there wasn’t much pressure to beef up system interconnects during that period. That’s changed in the past few years, probably at least partly due to the increased presence of GPUs and multi-GPU servers. Intel and AMD are both much more concerned with interconnects and with maximizing bandwidth between the CPU and accelerators like FPGAs and GPUs.

Another major difference between the late aughts and the present day is the near-ubiquity of SSD storage. Mechanical spinning drives are slow enough that PCIe speeds beyond 1.0 offered them limited benefit. That’s not the case any longer. We can reasonably assume that new PCIe 5.0 drives will deliver an appreciable fraction of maximum bandwidth, and ditto for PCIe 6.0 and 7.0 when those standards arrive.

PCIe performance increases are typically associated with GPUs, but it’s storage that’s been the greatest beneficiary, as shown in the chart below. Bandwidth figures are unidirectional instead of bidirectional, which is why values are half of what they are in the chart above.

From 2004 – 2022, main memory bandwidth increased by ~12x, while PCIe bandwidth grew by 16x. Consumer storage bandwidth, on the other hand, has risen by approximately 94x over the last 18 years. If you remember the days when faster storage performance was defined by onboard 8MB caches, 7200 RPM spindle speeds, and NCQ support, this is pretty heady stuff.
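Those growth factors are easy to check against the figures in this article. PCIe x16 unidirectional bandwidth went from 4GB/s (PCIe 1.0) to 64GB/s (PCIe 5.0), and a 2004-era hard drive against a ~7GB/s PCIe 4.0 SSD yields roughly the storage figure cited above. Note the ~75MB/s HDD baseline is our assumption for illustration, not a number from the article:

```python
# Rough growth-factor check for the 2004-2022 comparison.
pcie_2004, pcie_2022 = 4.0, 64.0   # GB/s, x16 unidirectional (PCIe 1.0 vs 5.0)
hdd_2004, ssd_2022 = 0.075, 7.0    # GB/s; ~75MB/s HDD baseline is assumed

print(f"PCIe growth:    {pcie_2022 / pcie_2004:.0f}x")  # 16x
print(f"Storage growth: {ssd_2022 / hdd_2004:.0f}x")    # ~93x
```

A slightly faster assumed HDD baseline pushes the storage figure toward the ~94x the article cites; either way, storage is the clear outlier.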

These improvements in storage bandwidth are why Sony and Microsoft are both focused on using fast PCIe storage as memory with their latest console launches instead of dramatically increasing the total available system RAM. Microsoft’s DirectStorage standard will extend these capabilities to Windows PCs as well. Windows systems may ship entirely with SSDs in the future (this does not mean that Windows would not install to a hard drive, only that hard drives would not ship as boot drives in Windows systems). We have long since reached the point where even a modest eMMC storage solution can outpace the performance of a hard drive.

We have also reached the point at which PC storage bandwidth rivals main memory bandwidth from 20 years ago. Bandwidth, of course, is just one aspect of a memory technology, and the access latencies of NAND reached over the PCIe bus are several orders of magnitude higher than what 2004-era DRAM could deliver, but it’s still an achievement that companies can leverage to improve overall system performance. A chain is only as strong as its weakest link, and HDDs were long the weakest link in PC performance. The shift to NAND has unlocked performance that was previously gated by spinning media.

I do not know enough low-level details to speculate on how operating systems and file systems might be improved if they were designed for SSDs first and foremost instead of for spinning media, but I suspect we’ll start to find out over the next decade. The encouraging thing about the continued development of these interconnect standards is that consumer devices should continue to benefit, even at the low end. The M2’s storage might be only half the speed of the M1’s (and I understand why that could irk some buyers), but the half-speed storage of the M2 MacBook is still faster than racks of hard drives from the pre-SSD era.

The PCI-SIG is making up for lost time by firing off new standard versions one right after the other. Our dates of 2024 and 2026 for adoption are speculative at this juncture, but we’d expect both in-market by 2025 / 2028 at the outside. Thus far, SSD vendors have been able to take advantage of the additional bandwidth unlocked by new storage standards almost as soon as those standards reach market. This is in stark contrast to GPUs, which typically show no launch performance difference at all between a new version of PCIe and the immediately previous version of the standard.

We can collectively expect PC storage to keep getting faster — and to reap the long-term benefits of that performance increase.

from ExtremeTech https://ift.tt/lT9Ck7z

Tuesday, June 28, 2022

NEWS TECHNOLOGIE

(Photo: PCMag)
Time to break out the violins, folks, because the world’s largest motherboard manufacturers have a tearjerker of a tale to tell. Both Asus and Gigabyte are projecting significantly lower motherboard shipments for the rest of 2022. In fact, the companies will likely ship 25 percent fewer mainboards this year than in 2021. The big disruptor here isn’t the chip shortage. It’s due in part to the fact that retailers can no longer bundle motherboards with formerly-hard-to-find GPUs. Now that the GPU crisis is effectively over, nobody in their right mind would pay for a bundle like that if they only wanted the GPU. The same goes for pre-built PCs.

This unnecessary bundling was what fueled things like the Newegg Shuffle last year. People would buy the bundle, then sell off the motherboard, monitor, or AIO. Some people even went so far as to buy an entire system, pull the GPU, motherboard, and CPU, and sell the rest. We won’t name names here, but your humble author is very close to someone who did exactly this. Cough. The report says Asus shipped 18 million motherboards in 2021 but projects it’ll ship only 14 million this year. Gigabyte is expecting a similar decline, going from 13 million last year to 9.5 million in 2022. The report notes these two companies control over 70 percent of the global motherboard market.
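The projected declines pencil out to roughly the 25 percent figure cited above:

```python
# Year-over-year shipment declines from the reported figures (millions of units).
asus_2021, asus_2022 = 18, 14
gigabyte_2021, gigabyte_2022 = 13, 9.5

def decline(before, after):
    """Percentage drop from 'before' to 'after'."""
    return (before - after) / before * 100

print(f"Asus:     {decline(asus_2021, asus_2022):.1f}%")          # 22.2%
print(f"Gigabyte: {decline(gigabyte_2021, gigabyte_2022):.1f}%")  # 26.9%
combined = decline(asus_2021 + gigabyte_2021, asus_2022 + gigabyte_2022)
print(f"Combined: {combined:.1f}%")                               # 24.2%
```

Between them, the two vendors shed 7.5 million units on 31 million, which rounds to the reported quarter of the market.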

Some of the ridiculous combos Newegg used to sell. (Image: @Ryugtx)

Overall, the entire motherboard industry will likely ship 10 million fewer motherboards this year. That’s according to a summary of the paywalled article posted by Tom’s Hardware. Digitimes sources say even something as exciting as the upcoming CPUs from Intel and AMD won’t be enough to change this outlook. That’s despite the fact that a lot of gamers are eagerly anticipating both Zen 4 and Raptor Lake. The article says the only forces that could boost sales would be the return of crypto mining, the end of the Russia/Ukraine war, or if inflation eases up.

Right now, it doesn’t seem like crypto mining will ever go back to what it once was. We know, famous last words. However, people have coined the term “crypto winter” to describe the currently bleak situation, suggesting it will be with us for some time. It also seems doubtful that miners who have thrown in the towel, and lost money doing it, will be eager to touch that stove a second time. Inflation is showing no signs of easing up either, and the same goes for the conflict in Ukraine. All that is to say it looks like we will be back to normal, as far as components go, for the foreseeable future. To be clear, we don’t delight in any honest business falling on hard times. However, those bundles were pure bull pucky, to put it charitably. Hopefully, as the market returns to normal, those types of bundles won’t darken our inboxes again.

from ExtremeTech https://ift.tt/4zBiSfW

NEWS TECHNOLOGIE

Right now the sweet spot for price and performance with SSDs is 1TB, as these drives are fast and affordable. That’s clearly not enough storage for some people, though. (Definitely not – Ed) To answer the needs of those with projects as large as their bank accounts, Sabrent has announced the Rocket 4 Plus Destroyer 2. It’s the second iteration of the company’s add-in PCIe hardware RAID solution. It holds up to eight SSDs, such as eight 8TB Sabrent Rocket 4 drives. This configuration allows it to offer 64TB of stupid-fast NVMe storage. Although it can be a bootable device, that’s not recommended for obvious reasons. Instead, it should be used for large projects demanding plentiful sequential throughput.

The Destroyer 2 is a collaboration with HighPoint, as it pairs a HighPoint SSD7540 PCIe 4.0 x16 RAID card with Sabrent’s own SSDs. There’s also a Broadcom 8-series PCIe 4.0 PEX switch that manages the eight M.2 drives. The card appears naked in photos, but it includes a full-length aluminum heatsink with thermal pads and active cooling. Despite the cooler, it’s still a single-slot PCIe add-in card. Though we’re not sure exactly how long it is, it’s reported to be the same length as a high-end GPU. Those are typically around 12 inches or so, depending on the model. The Destroyer 2 supports M.2 SSDs in 2242, 2260, and 2280 lengths. It goes into a PCIe 4.0 x16 slot and requires a six-pin PCIe power connector for the SSDs.

(Image: Sabrent)

Since higher-capacity drives typically deliver better performance thanks to increased parallelism, you will probably get the best results with Sabrent’s 8TB drive. Still, that’s a $1,999 SSD (currently only $1,499 on Amazon). Eight of them would set you back $12,000, so hopefully your boss won’t mind seeing that on an expense report. When this puppy is fully loaded it can hit over 28GB/s of sequential throughput. That is roughly four times the maximum possible for a single PCIe 4.0 SSD, which tops out around 7GB/s. Interestingly, the previous Destroyer posted similar benchmarks, so we’re not sure what’s different this time. Tom’s Hardware notes that it offers “slightly better performance,” but that’s not quantified. Sabrent does clarify that you can use PCIe 3.0 drives too, so PCIe 4.0 is not required. However, for such an investment it seems shortsighted to use slower drives.
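The throughput figures are easy to reason about. Eight PCIe 4.0 x4 drives at ~7GB/s each could theoretically push 56GB/s in aggregate, but everything funnels through one PCIe 4.0 x16 host link, which caps out around 32GB/s unidirectional (the standard ~2GB/s-per-lane figure; our arithmetic, not Sabrent's). The quoted 28GB/s lands just under that ceiling:

```python
# Back-of-the-envelope throughput limits for an 8-drive PCIe 4.0 RAID card.
drives, per_drive = 8, 7.0   # GB/s per PCIe 4.0 x4 SSD
host_link = 16 * 2.0         # PCIe 4.0 x16 link, ~2GB/s/lane, unidirectional

aggregate = drives * per_drive
print(f"Aggregate drive bandwidth: {aggregate} GB/s")              # 56.0 GB/s
print(f"Host link ceiling:         {host_link} GB/s")              # 32.0 GB/s
print(f"Usable ceiling:            {min(aggregate, host_link)} GB/s")
```

In other words, the bottleneck is the slot rather than the drives, and 28GB/s is nearly 90 percent of the x16 link's theoretical maximum.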

The most distinctive feature of the Destroyer 2 is that once you’ve formatted the array, you can drop the card into any system and it’s ready to go to work. The formatting and RAID creation are done with HighPoint’s browser-based RAID management software. It’s capable of all the usual RAID levels, including RAID 10. It’s not clear what the Destroyer 2 will cost, or what the various configurations will be. Hopefully Sabrent releases more information shortly, as it could make a fun top-tier upgrade for those tired of waiting for PCIe 5.0 SSDs.

from ExtremeTech https://ift.tt/3VvqJCX

Monday, June 27, 2022

NEWS TECHNOLOGIE

A rendering of Intel's future facility, which if you squint, kind of looks like a circuit board. (Image: Intel)

Intel was planning to hold a ceremonial groundbreaking event at its new Ohio fab in July. Now, those plans are on hold. The July 22 event has been shelved, and Intel says it’s up to Congress whether it will ever be held at all. Intel is calling on the nation’s legislators to pass the CHIPS Act, which would release funds Intel was counting on to help offset the cost of building out the new megafab near Columbus.

The crux of the dispute is funding. The CHIPS Act was first passed by the Senate in the summer of 2021. Since then, it’s been stuck in legislative gridlock. The House of Representatives appropriated $52 billion for it in February. However, since then the two chambers have been unable to reconcile their bills and pass the sucker. The Washington Post reports lawmakers have been delayed by arguments over matters unrelated to silicon manufacturing. These issues include climate policies, trade with China, and more recent matters such as funding for the war in Ukraine. The clock is ticking, though, as legislators famously leave town in August for a month-long recess. When they return, they are likely to once again be distracted by the upcoming midterm elections.

A 3D rendering of Intel’s Ohio “megafab.” (Image: Intel)

An Intel spokesman emailed a comment to the Post about the situation: “As we said in our January announcement [announcing the Ohio location], the scope and pace of our expansion in Ohio will depend heavily on funding from the CHIPS Act. Unfortunately, CHIPS Act funding has moved more slowly than we expected and we still don’t know when it will get done. It is time for Congress to act so we can move forward at the speed and scale we have long envisioned for Ohio and our other projects.” The Post noted that GlobalFoundries has expressed similar frustrations, as it’s also expanding a facility in New York.

Intel has already committed to spending $20 billion on the facility, but the final cost might reach as high as $100 billion. Intel says its plans may change if it doesn’t receive the subsidies it was hoping to get from Uncle Sam. It’s also the recipient of a generous $2 billion incentive package from the state of Ohio. Intel says the site will eventually provide 3,000 jobs for Intel workers, and its construction will create another 7,000 jobs. Intel also clarified that the funding delay will only impact its plans for the Ohio fab; it will have no effect on its planned expansion at its fab in Chandler, AZ.

Intel isn’t the only company waiting for Congress to get its act together. Last week, over 100 CEOs from tech companies such as Microsoft and Google sent a letter to Congress demanding action on the legislation. “The rest of the world is not waiting for the U.S. to act. Our global competitors are investing in their industry, their workers, and their economies, and it is imperative that Congress act to enhance U.S. competitiveness,” the letter reads.

from ExtremeTech https://ift.tt/TFrR5Dp

NEWS TECHNOLOGIE

Large mining rigs can contain dozens of GPUs, and not one of them is being used for fragging noobs.

Cryptocurrencies are taking a beating right now. All the big coins are way, way down from where they were two years ago, while electricity prices have risen at the same time. For a lot of people, that means the math simply doesn’t work anymore when it comes to using GPUs to mine crypto. Right now it’s cheaper to just buy crypto, since it’s so “affordable.” As we head into summer, there’s no relief in sight for anyone’s electricity bills. This confluence of events has created something PC gamers have been waiting for since 2019 or so: the mass GPU selloff. Obviously, a lot of these GPUs have been ridden hard and put away wet, but that doesn’t mean you should dismiss them altogether. Still, there are some things to consider when purchasing a used GPU.

First off, I don’t think GPUs that have been used for mining should be automatically avoided. There are a lot of variables at play in that environment, after all. Most, if not all, miners undervolt and underclock their GPU cores to reduce power consumption and heat, which is good news for anyone looking to scoop the cards up on the cheap; it’s the opposite of a card that’s been heavily overclocked for years. However, some coins like Ethereum require a lot of memory bandwidth, so many miners overclock the memory instead. That generates a lot of heat, and Nvidia’s RTX 30-series cards are notorious for memory that runs hot already. Overclocked memory combined with the lack of spacing between GPUs in a mining rack could mean the card in question lived its life in a torture chamber.

The crypto crash has resulted in some tempting deals for used GPUs. (Image: eBay)

Consumer-level video cards are also not tested or validated for 24/365 operation, and some of the cards hitting the market could have been running nonstop for years. Still, this doesn’t necessarily mean they are damaged goods. As long as they were running within spec, and weren’t overclocked the whole time, they might be totally fine. The best way to find out how a card was used for mining is to just ask the seller for details. What were the clock and memory speeds? What was the cooling setup? If the seller doesn’t have that information, or pretends not to know what it means, we’d shop elsewhere. Glancing through eBay listings, we didn’t find a single one that mentioned mining, so it seems sellers are being cagey about it. In our opinion, if that information isn’t disclosed as part of the transaction, we wouldn’t go through with it.

Also be wary of posts that only show a stock photo of the GPU or the box it comes in. Personally, I’d buy local if you live in a city big enough to offer such options. That way you can hold the card in your hands and at least inspect it before buying. Generally, I’d be wary of eBay, especially when there are better options on Reddit. For example, /r/minerswap and /r/hardwareswap are currently full of good deals on GPUs, and you can browse a poster’s history to see if they’ve been involved in mining operations.

Bottom line, however: new GPUs are now close to MSRP thanks to the sudden rebalancing of supply and demand in the GPU space. That means you can get a brand-new RTX 3080 for $800 or so, with a generous warranty. In our opinion, if your budget is at least a little bit flexible, we’d just pay more for the security of knowing our GPU will run perfectly for years.

There’s also the matter of timing. It’s unfortunate the used-GPU gold rush has arrived just as we enter the twilight of the current generation. Both Nvidia and AMD have next-gen GPUs in the hopper, and rumors indicate they might offer twice the performance of current-gen cards, which makes this a terrible time to upgrade. Still, if you haven’t upgraded in years and are limping along on a GTX 960 or something, there are some killer deals out there. Just be sure to practice due diligence before you pull the trigger.

from ExtremeTech https://ift.tt/8OZhidq

NEWS TECHNOLOGIE

There are billions of Android devices in the world, and that makes the platform a target. Online fraudsters and scammers constantly create malware in an attempt to infiltrate the Android OS. Some of the nastier malware can definitely, 100 percent wreck your phone. The reporting on these threats is based on fact, but it can overstate the real risk of picking up a piece of malware, and the definition of “malware” can be quite vague. Security firms are usually pushing a virus-scanning app of some sort, but Android is by its very nature more secure than a desktop computer. Odds are, you don’t need to pile on security apps; you’ve probably already got what you need.

The Scare Tactics

In a 2019 report from AV-Comparatives, we learned that most antivirus apps on Android don’t even do anything to check apps for malicious behavior. They just use whitelists and blacklists to flag apps, which is ineffective and makes them little more than advertising platforms with some fake buttons. Shocking and upsetting, right? They can get away with it because true Android viruses that take over your device are not as common as you’d expect. “Malware” can encompass milder threats like apps that harvest personal information or trigger pop-up ads. You still want to avoid those, of course, but malware scanners aren’t going to catch apps that simply abuse the established Android permission architecture.

Android and other mobile platforms have their roots in the modern era, when programmers understood the dangers of the internet. We’ve all been conditioned by PC malware, which can sneak onto your system simply because you visited the wrong website with a vulnerable browser. These “drive-by downloads” aren’t feasible on Android without a pre-existing infection. On Android, you have to physically tap on a notification to install an APK downloaded from a source outside the Play Store. Even then, there are security settings that need to be manually bypassed. That’s not to say it’s impossible for Android to have a severe zero-day bug that allows someone to sneak apps onto your phone, but that would have to be an extremely delicate, costly operation. Unless you have high-level security clearance or a zillion dollars’ worth of cryptocurrency, it’s unlikely anyone would bother with such a scheme.

So, what about malware on the Play Store? Again, that depends on what you mean by malware. The most severe security risks will never make it into the store — Google’s platform has the ability to scan for known malware when it’s uploaded. There’s also a human review process in place for anything that looks even a little bit questionable. You might occasionally hear about some “malware” apps in the Play Store, usually related to information harvesting or advertising shenanigans. Google deals with these quickly, but anti-malware apps won’t catch this sort of thing.

The solution pushed by AV companies is to install a security suite that manually scans every app, monitors your Web traffic, and so on. These apps tend to be a drain on resources and are generally annoying with plentiful notifications and pop-ups. You probably don’t need to install Lookout, AVG, Norton, or any of the other AV apps on Android. Instead, there are some completely reasonable steps you can take that won’t drag down your phone.

What You Should Do to Stay Safe

Your phone already has antivirus protection built in, and your first line of defense is simply to not mess with Android’s default security settings. To get Google certification, every phone and tablet ships with “Unknown sources” disabled in the security settings. If you want to sideload an APK downloaded from outside Google Play, your phone will prompt you to enable that feature for the originating app. Leaving this disabled keeps you safe from virtually all Android malware, because there’s almost none of it in the Play Store.

There are legitimate reasons to allow unknown sources, though. For example, Amazon’s Appstore client sideloads the apps and games you buy, and some reputable sites re-host official app updates that are rolling out in stages so you don’t have to wait your turn. Along with the Play Store, you also have Google Play Protect, which scans your apps for malicious activity. Updates to Play Protect roll out via Play Services, so you don’t need system updates to remain protected. In the best case, installing a third-party AV app just duplicates the work of Play Protect.

Users have been rooting their Android phones ever since the first handsets hit the market, but it’s less common these days. The platform offers many of the features people used to root in order to acquire. Using rooted Android is basically like running a computer in administrator mode. While it’s possible to run a rooted phone safely, it’s definitely a security risk. Some exploits and malware need root access to function and are otherwise harmless even if you do somehow install them without root. If you don’t have a good reason to root your phone or tablet, just don’t open yourself up to that possibility.

Another thing you can do is pay attention to app permissions. Some Android apps may not be “malware” per se, but they can still snoop through your data. Most people don’t read the permissions for the apps they install, but the Play Store makes all that information available. As of Android 6.0, apps must request access to sensitive permissions like your contacts, local storage, microphone, camera, and location tracking. If an app has reason to access these features (like a social networking app), you’re probably fine. If, however, a flashlight app is asking for your contact list, you might want to think again. The system settings include tools to manually revoke permissions for any app, and Android will even alert you if an app starts requesting your location in the background so you can disable it.
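The mental check described above ("does this permission make sense for this kind of app?") boils down to a simple rule: flag any requested permission that isn't plausible for the app's category. Here is a toy sketch of that rule; the categories and expected-permission lists are illustrative assumptions, not a real Android API:

```python
# Toy permission audit: flag requested permissions that don't fit the app's
# stated purpose. Categories and their expected sets are illustrative only.
EXPECTED = {
    "flashlight": {"android.permission.CAMERA"},  # flash LED sits behind CAMERA
    "social": {"android.permission.READ_CONTACTS",
               "android.permission.CAMERA",
               "android.permission.ACCESS_FINE_LOCATION"},
}

def suspicious_permissions(category: str, requested: set) -> set:
    """Return requested permissions that fall outside the category's norm."""
    return requested - EXPECTED.get(category, set())

# The classic red flag: a flashlight app asking for your contacts.
flagged = suspicious_permissions(
    "flashlight",
    {"android.permission.CAMERA", "android.permission.READ_CONTACTS"},
)
print(flagged)  # {'android.permission.READ_CONTACTS'}
```

Real auditing is fuzzier than a set difference, of course, but this is the shape of the judgment the Play Store's permission list lets you make before installing.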

It really just takes a tiny bit of common sense to avoid Android malware. If you do nothing else, keeping your downloads limited to the Play Store will keep you safe from almost all threats out there. The antivirus apps are at best redundant and at worst a detriment to your system performance.

from ExtremeTech https://ift.tt/M3xTGVq