Tuesday, May 31, 2022

NEWS TECHNOLOGY

Microsoft is putting a lot of energy into promoting its Game Pass subscription services. While it’s not included in the base subscription, Microsoft’s Xbox Cloud Gaming (formerly xCloud) could be its big play for the future, but that future might be a little further off than we thought. Microsoft has confirmed it was working on a streaming dongle code-named Keystone, but it has decided to scrap that project and start over.

Microsoft’s Game Pass offerings are a bit convoluted, spanning both PC and Xbox with platform-specific subscriptions and an all-in sub that adds cloud gaming. That $15-per-month Ultimate plan lets you download a selection of titles to your local gaming hardware (PC or Xbox), but you can also stream a subset of games to a browser, mobile device, Xbox, or PC. Adding a Chromecast-style streaming dongle to expand compatibility seems like an easy win, but Microsoft isn’t so sure.

Keystone would have been a small HDMI-equipped device that could bring Xbox Cloud Gaming to any TV or monitor. According to Microsoft, it will take what it learned making Keystone and apply that to new game streaming hardware. However, it doesn’t have any timelines or even vague concepts to share right now. Microsoft is starting from scratch. 

When Google announced Stadia, the availability of inexpensive Chromecast devices was cited as a significant advantage, but Stadia hasn’t exactly been lighting up the internet, and developers seem mostly uninterested in porting AAA games to Google’s platform. It’s possible that Microsoft only felt it needed the streaming stick, which it pre-announced in 2021, as a foil to Google’s streaming platform. Xbox Cloud Gaming has some major advantages over Google’s service, even in what most consider a dry spell for Game Pass.

Microsoft is probably feeling much more secure with its cloud gaming prowess right now. Unlike Google, Microsoft has a raft of first-party titles from the studios it has gobbled up in recent years, like Bethesda (with its Activision Blizzard acquisition still pending). It promises all Xbox Game Studios titles will come to Game Pass on day one, although they might not all be available to stream right away.

We don’t know what form Microsoft’s streaming explorations will take. The company is committed to boosting Game Pass subscription numbers, but is it so committed that it will create a dongle that competes with the Xbox? Sony is set to roll out its updated PS Plus service, which supports downloadable and streaming games, but it will only work on Sony consoles and PC. Without a strong challenge from Google, there’s little reason for Microsoft to make a cheap streaming dongle.

from ExtremeTech https://ift.tt/oyxDf8g

NEWS TECHNOLOGY

(Photo: Northwestern University)
Engineers at Northwestern University have created micro-robots that mimic the peekytoe crab—but at an almost unbelievably small scale.

The half-millimeter “crabs” constitute the world’s smallest remote-control robots. Smaller than a flea, they’re able to walk along the edge of a penny or thread a sewing needle. Despite (or perhaps because of) their size, the micro-robots are able to “bend, twist, crawl, walk, turn and even jump,” giving researchers hope that tiny robots may someday be able to perform tasks for humans in highly constrained environments.

“You might imagine micro-robots as agents to repair or assemble small structures or machines in industry or as surgical assistants to clear clogged arteries, to stop internal bleeding or to eliminate cancerous tumors—all in minimally invasive procedures,” said bioelectronics engineer John A. Rogers in a Northwestern University statement. Rogers and his colleague Yonggang Huang, a mechanical engineer, conducted their research in experimental and theoretical halves. The product of their work has since been published in the journal Science Robotics.  

If you’re wondering how Rogers and Huang packed complicated hardware into such a tiny structure, you’d be right to ask—because they didn’t. The micro-robots are made of a shape-memory alloy that, when heated, returns to a “remembered” default shape. A scanned laser beam quickly heats the micro-robot at multiple points throughout its body, while a thin glass coating springs each section back to its deformed shape as it cools. This rapid back-and-forth allows the micro-robot to move from one location to another, covering a distance equal to half its body length per second. The robot crab also moves in whichever direction the laser is scanned; if the operator points the laser to the right, the micro-robot travels right.
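For a sense of scale, here’s a quick back-of-the-envelope speed calculation based on the figures above (our own illustrative arithmetic, not the researchers’):

```python
# Rough speed estimate for the half-millimeter robot crab, using the
# figure cited above: it covers about half its body length per second.
body_length_mm = 0.5
speed_mm_per_s = 0.5 * body_length_mm  # 0.25 mm/s

# Time to walk across a US penny (~19 mm in diameter):
penny_diameter_mm = 19.05
seconds = penny_diameter_mm / speed_mm_per_s
print(f"~{seconds:.0f} seconds to cross a penny")  # ~76 seconds
```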

This isn’t the first time Rogers and Huang have teamed up to engineer tiny tech. Less than a year ago, the duo unveiled the world’s smallest flying structure, a winged microchip about the size of an ant’s head. Before that, they worked with a team of biomedical researchers to create small bioresorbable cardiac pacemakers that can be left in the body to disintegrate after their temporary purposes are fulfilled. 

And because experimental engineering can in fact contain a bit of levity, the engineers chose to model their micro-robots off of crabs just because they were amused by the resulting movement. They also found themselves capable of producing micro-robots that looked and behaved like inchworms, beetles, and crickets, but it was the crabs they found funny and inspiring. “It was a creative whim,” Rogers said.

from ExtremeTech https://ift.tt/RukeMjC

NEWS TECHNOLOGY

Sony says the PS5 is the preferred way to play Cyberpunk. Good luck finding one.

After more than a year and a half, it’s still almost impossible to purchase a new game console for retail price. Sure, there’s supply if you don’t mind a 50-75 percent markup, but everyone should mind that. Thankfully, Sony might be riding to the rescue soon. The PlayStation 5 maker promises it’s going to ramp up production to unprecedented levels. We’ll believe it when we see it, but it’s still encouraging to hear. 

The PlayStation 5 and Xbox Series X launched at the tail end of 2020, right in the midst of a historic shortage of components and a once-in-a-century viral pandemic that kept people at home and bored. It was possibly the worst time to go looking for an expensive new piece of gaming hardware. Predictably, scalpers managed to collect all the inventory and resell it at inflated rates.

Retailers have implemented some policies to slow down resellers, but it’s still hard to find a PS5 in stock that isn’t marked up to $800 or $900 from the $500 MSRP. In a briefing with investors, Sony Interactive Entertainment President and CEO Jim Ryan pledged to increase supply of the consoles. It’s easy to see why investors would want that assurance. Sony sold only two million consoles in the first quarter of 2022, a significant decline from the previous quarter, but demand has not slumped. Sony says it can sell 80,000 PS5s in just 82 minutes, whereas it would have taken nine days to move that many PS4s at the 18-month mark. By not having more units, Sony is leaving a ton of money on the table.
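Convert those figures to a common rate, and the gap is hard to overstate. A quick illustrative calculation using the numbers Sony cited:

```python
# Sony's comparison expressed as sales rates: 80,000 PS5s in 82
# minutes versus nine days for the same number of PS4s at the
# equivalent point in that console's life. Illustrative arithmetic.
units = 80_000
ps5_minutes = 82
ps4_minutes = 9 * 24 * 60  # nine days in minutes

ps5_rate = units / ps5_minutes  # ~976 consoles per minute
ps4_rate = units / ps4_minutes  # ~6 consoles per minute
print(f"PS5: {ps5_rate:,.0f}/min vs. PS4: {ps4_rate:.1f}/min "
      f"(~{ps5_rate / ps4_rate:.0f}x faster)")  # ~158x
```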

(Photo: Onur Binay/Unsplash)

Apparently, Sony believes it can boost production to a level that would allow PS5 sales to surpass comparable PS4 numbers in 2024. At that point in its life, the PS4 had moved between 40 and 50 million units, but the PS5 is currently hovering just under 20 million total. Sony will get there by working with more suppliers to guarantee access to components, but the rest of Ryan’s claims were sufficiently vague to be meaningless.

Until Sony can boost production numbers, gamers may continue playing on older hardware. At this point in the PS4 era, there were just 36 million PS3 players. Currently, there are 84 million still using the PS4, and that number won’t sink much until you can swing by your local retailer (or Amazon listing) and buy the PS5 for the real price.

from ExtremeTech https://ift.tt/9u0gQae

NEWS TECHNOLOGY

(Photo: Denise Jones/Unsplash)
Despite how much we already know about the tragic volcanic eruption that occurred almost 2,000 years ago in Pompeii, there’s still a lot to be discovered about the people who lived there. Scientists have made a major stride in this area by sequencing the complete genome of a man who died that fateful day.

The eruption at Pompeii infamously encased the city in pyroclastic flows, creating an eerie snapshot of its residents’ final moments. Because the ash and volcanic debris preserved everything from human bodies to food, scientists have been able to turn the city—just a few miles southeast of Naples, Italy—into an archaeological site that offers glimpses of life in 79 CE. One such glimpse can be found at the House of the Craftsman, a small structure in which the remains of two ash-engulfed humans were first found nearly a century ago.  

The remains belonged to one 50-year-old woman and one man believed to be between 35 and 40 years old. Dr. Serena Viva, an anthropologist at Italy’s University of Salento, worked with geogeneticists to extract DNA from both skeletons. But according to the report, published in the Nature Portfolio journal Scientific Reports, the team was unable to obtain usable data from the woman’s DNA, leaving only the man’s DNA to analyze.

Researchers believe the pair experienced a quick death as a cloud of superheated ash overtook the home. Their positions suggest they did not attempt to escape. (Photo: Notizie degli Scavi di Antichità, 1934)

A small amount of bone taken from the base of the man’s skull provided enough intact DNA for the researchers to sequence a complete genome. His genome revealed that while he shared genetic similarities with other people who lived in Italy during the Roman Imperial age, he also possessed genes typical of individuals from Sardinia, an island off Italy’s western coast. This tells the researchers that the Italian Peninsula may have harbored more genetic diversity than originally thought.

The man’s remains also contained ancient DNA from Mycobacterium tuberculosis, the bacterium that causes tuberculosis, a disease that primarily affects the lungs. Because a few of the man’s vertebrae showed signs of disease, Dr. Viva’s team believes he was suffering from tuberculosis prior to the eruption.

Thanks to the effectiveness of modern sequencing machines—and the success of this study—researchers are likely to continue analyzing DNA from preserved remains at Pompeii and other archaeological sites, like Herculaneum (which was also engulfed in volcanic ash). “Our initial findings provide a foundation to promote an intensive analysis of well-preserved Pompeian individuals,” the study reads. “Supported by the enormous amount of archaeological information that has been collected in the past century for the city of Pompeii, their paleogenetic analyses will help us to reconstruct the lifestyle of this fascinating population of the Imperial Roman period.”

from ExtremeTech https://ift.tt/gaEXAfD

NEWS TECHNOLOGY

Bennu, as seen by OSIRIS-REx.

We’ve been hearing about asteroid mining for years, and while it wasn’t crazy to speculate on the possibility, there were plenty of barriers. However, humanity has recently studied asteroids up close, landed on them, and even shot one with a high-speed projectile. The day may be coming when asteroid mining will be viable, and a startup called AstroForge aims to be the first. This newly founded company has announced its plans to begin mining asteroids for rare metals, and it already has a test mission planned. 

Scientists estimate that even a small asteroid could hold billions of dollars worth of precious metals and other resources, but the problem is getting that material to Earth without breaking the bank. Past efforts to mine asteroids have made water their initial focus. With a supply of water, you can split it into oxygen and hydrogen for fuel. However, AstroForge is going right for the shiny stuff, saying there’s no market for fuel depots in space, and vessels like the upcoming Starship could potentially heft enough water into orbit that it’s not worth collecting it from asteroids. 

AstroForge intends to focus its efforts on resources that are in high demand here on Earth: platinum-group metals like osmium, iridium, palladium, and of course, platinum. Mining these materials on Earth is a dirty business, taking up large swaths of land and producing extensive pollution. The US, where AstroForge is based, is also not blessed with large deposits of platinum-group metals, so having an extraterrestrial source of these materials could be a boon to national security, AstroForge CEO Matt Gialich recently told Space.com.

NASA’s OSIRIS-REx is believed to carry about 2kg of asteroid regolith, but the mission comes with an $800 million price tag.

The company claims to have developed a “lab tested” technology to process asteroid material in space so it can be returned to Earth. It has raised $13 million to fund its operations, including a flight on a SpaceX Falcon 9 rocket to test the tech in orbit. However, reaching an asteroid to mine it could be the real problem. 

So far, space agencies like NASA and JAXA have managed to get a few robotic probes to nearby asteroids. But “nearby” still means millions of miles. It takes years just to reach the target, and a return trip only adds to the expense. JAXA spent about $150 million on the Hayabusa2 mission to collect 5.4 grams of surface material from the asteroid Ryugu, dropping the payload back home in 2020. Meanwhile, NASA’s OSIRIS-REx mission recently scooped up an estimated two kilograms of asteroid regolith, but that one has cost about $800 million so far, and the sample won’t be back on Earth until 2023. Unless AstroForge’s mining technology is truly revolutionary, the economics of asteroid mining are still very questionable. Hopefully, we get an update after the upcoming test flight.
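To see why, it helps to put those sample-return missions in cost-per-gram terms. A rough, illustrative comparison based on the approximate figures above:

```python
# Back-of-the-envelope cost per gram of returned asteroid material,
# using the approximate budgets and sample masses cited above.
missions = {
    "Hayabusa2 (JAXA)":  (150e6, 5.4),     # (mission cost USD, grams)
    "OSIRIS-REx (NASA)": (800e6, 2000.0),  # ~2 kg estimate
}
for name, (cost, grams) in missions.items():
    print(f"{name}: ~${cost / grams:,.0f} per gram")
# Hayabusa2: ~$27.8M per gram; OSIRIS-REx: ~$400,000 per gram. For
# comparison, platinum trades on the order of $30 per gram.
```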

from ExtremeTech https://ift.tt/irIQ14O

Friday, May 27, 2022

NEWS TECHNOLOGY

Earlier this month, AMD offered us the chance to preview FidelityFX Super Resolution (FSR) 2.0, courtesy of the game Deathloop. Deathloop is currently the only title to support both versions of FSR as well as Nvidia’s Deep Learning Super Sampling (DLSS), making this an excellent opportunity to take them both out for a collective spin.

Despite the name, FSR 2.0 is not an update to FSR 1.0. It’s an entirely new approach that’s designed to bypass some of FSR 1.0’s weaknesses. AMD’s first attempt at this kind of upscaling was based entirely on spatial upscaling and did not use temporal information. It also required a high-quality anti-aliasing method to work properly, which many games don’t support. Deathloop lacks this support, and FSR 1.0 in the game isn’t much to write home about.

AMD is more confident in FidelityFX Super Resolution 2.0 than it ever was in FSR 1.0. FSR 1.0 was positioned as an alternative feature along the lines of Radeon Image Sharpening, while FSR 2.0 is positioned as more of a direct competitor to DLSS.

FSR 2.0 incorporates the temporal data that FSR 1.0 lacks and it doesn’t require a game to support high-quality antialiasing in order to render acceptable output. AMD previewed it back in March, but this is the first time we’ve gotten to play with it. We’ve taken FSR 2.0 out for a spin against DLSS on a 1440p panel to capture the rendering differences.
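Before diving into the screenshots, it helps to see what “incorporating temporal data” actually means. Below is a deliberately simplified sketch of temporal upscaling. This is our own toy illustration, not AMD’s algorithm: the real FSR 2.0 pipeline adds sub-pixel camera jitter, depth-based disocclusion detection, and careful history rectification to avoid ghosting.

```python
import numpy as np

def temporal_upscale(low_res, history, motion, alpha=0.1):
    """Toy temporal upscaler: blend a new low-resolution frame into a
    full-resolution history buffer reprojected along motion vectors.
    low_res: current frame; history: (h, w) accumulated output;
    motion: (h, w, 2) per-pixel motion in output-space pixels."""
    h, w = history.shape
    lh, lw = low_res.shape
    # Nearest-neighbor upsample of the current frame. (Real pipelines
    # resolve sub-pixel-jittered samples instead of duplicating pixels.)
    up = low_res[np.arange(h) * lh // h][:, np.arange(w) * lw // w]
    # Reproject history: for each output pixel, fetch where that
    # content was located last frame.
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(yy - motion[..., 1].astype(int), 0, h - 1)
    src_x = np.clip(xx - motion[..., 0].astype(int), 0, w - 1)
    reprojected = history[src_y, src_x]
    # Exponential blend: history carries accumulated detail, while the
    # current frame corrects stale or disoccluded regions.
    return (1 - alpha) * reprojected + alpha * up
```

Run every frame, the history buffer accumulates many slightly different samples of each surface, which is where the reconstructed detail comes from.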

One thing to note before we get started: the small gray spots you see on some images are not errors introduced by any of these visual settings. They’re transient phenomena. There’s one place where FSR and DLSS both introduce errors in our comparison, and we’ll call it out when we get to it.

How to Compare DLSS and FSR in This Article

This article contains a mixture of directly embedded images and links out to Imgsli. Imgsli is an excellent free tool for comparing two or more images in an A/B(C) fashion. Because of the number of comparison points, we’re going to run comparisons for DLSS, then for FSR 2.0, and then directly head-to-head for our selected scenes. You can select which image you compare in Imgsli using the drop-down menu at the top of each image.

Nvidia DLSS: Beach

Let’s get started. Deathloop begins with your own murder, after which you wake up on a deserted beach. Here’s the zoomed-out, native 1440p version of the image with no AA or alternative upsampling:

Nvidia Native versus DLSS. Adjustable image comparison available at Imgsli.com

But zoomed out doesn’t give us the best view of what changes in each scene. It’s actually hard to tell what’s different across these frames. (From Nvidia’s perspective, that’s a good thing). A 600 percent zoom is a much better way to see rendering subtleties.

Nvidia DLSS quality comparison. Close-up of shot above. User-adjustable image available on Imgsli.

Quality DLSS settings substantially reduce jaggies compared to the 2560×1440 native resolution. This is expected — DLSS performs antialiasing, in addition to its other functions — but the difference is large. Shifting down to “Balanced” hardly impacts image quality at all. One downside to DLSS (and this is present in every mode) is that ground textures are a bit blurred compared to the native image. This is really only visible at tight zoom, however.

AMD FSR 2.0: Beach

According to AMD, FSR 2.0 is better than FSR 1.0 at every quality level. We focused our testing on FSR 2.0 for these AMD comparisons, but include some FSR 1.0 shots as well, to show the degree of uplift.

AMD native 1440p versus FSR 2.0 versus FSR 1.0. User-comparable results available on Imgsli.

The improvement from FSR 1.0 to FSR 2.0 is immediately obvious. FSR 1.0 blurs content heavily, and the line leading away from the pole is a vague smear. With FSR 2.0, it resolves into a distinct line. Switch to “Performance” for both tests, and you’ll see just how much better FSR 2.0 is than FSR 1.0. AMD claimed that every FSR 2.0 quality setting was better than every FSR 1.0 quality setting, but this comparison shows AMD is actually underselling its own feature. Even FSR 2.0’s “Performance” setting is better than FSR 1.0’s “Quality,” though Deathloop isn’t considered a great test case for FidelityFX Super Resolution’s first iteration.

AMD 1440p native versus FSR 2.0 and FSR 1.0. User-adjustable comparison available at Imgsli.

FSR versus DLSS: Beach

When it comes to FSR 2.0 versus DLSS, FSR 2.0 wins the comparison in this set of images. Note: We’ve combined the standard shot and closeups in this comparison to try to keep the amount of clickable material to some kind of reasonable limit.

AMD FSR 2.0 versus Nvidia DLSS. User-adjustable comparison available at Imgsli.

FSR 2.0 is much less blurry than DLSS, at every detail setting. We’ve included both the zoomed-out and zoomed-in shots to illustrate the distinction in both modes. FSR 2.0’s “Balanced” preset offers better image quality than DLSS’ “Quality” preset. One thing we do encourage you to keep in mind is that the relative quality of DLSS and FSR can vary considerably depending on the suitability of the game engine for the format and the amount of work invested by the developer. These comparisons might play out differently in another title.

The gains from FSR 1.0 Ultra Quality to FSR 2.0’s “Quality” mode are quite impressive. Even at top quality, FSR 1.0 struggled to distinguish the wire strung up at the pole from background clutter, and lower-quality versions of the feature all but lose the strand. One of AMD’s promises for FidelityFX Super Resolution 2.0 was that the feature’s “Balanced” mode would be better than FSR 1.0’s “Ultra Quality” mode. In some ways, FSR 2.0’s “Performance” mode is better than UQ FSR 1.0, though we wouldn’t actually recommend using Performance mode.
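For reference, those preset names map to fixed internal render scales. Based on AMD’s published FSR 2.0 scale factors (DLSS 2.x uses a near-identical ladder), here’s what each preset works out to at our 1440p test resolution:

```python
# Internal render resolution implied by each FSR 2.0 preset at a
# 2560x1440 output, using AMD's published per-axis scale factors.
presets = {"Quality": 1.5, "Balanced": 1.7,
           "Performance": 2.0, "Ultra Performance": 3.0}
out_w, out_h = 2560, 1440
for name, scale in presets.items():
    w, h = int(out_w / scale), int(out_h / scale)
    frac = (w * h) / (out_w * out_h)
    print(f"{name:>17}: {w}x{h} ({frac:.0%} of native pixels)")
# Quality renders ~44% of native pixels; Performance just 25%. The
# upscaler reconstructs the rest, which is why temporal data matters.
```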

Nvidia DLSS: Bunker

Let’s move from the beach to the interior of the starting area and check out a nearby underground bunker.

Nvidia native rendering versus DLSS. User-adjustable image comparison via Imgsli.

Nvidia DLSS and AMD’s FidelityFX Super Resolution both create a weird texture problem on the ground in this scene. You might not know this was an accident if you didn’t look closely at other rendering modes — while it’s a bit odd looking, the texture doesn’t flicker or change dramatically as one moves around the room. Light across the scene is a fair bit different between 1440p with no DLSS and DLSS engaged, but you can see how DLSS prevents horizontal line shimmer where there are lines across surfaces.

Apart from the introduced error, I consider DLSS Quality to improve the overall image (and FSR creates the same error). DLSS Balanced, on the other hand, not so much. It’s not that DLSS Balanced doesn’t have any advantages over native 1440p, but there are trade-offs as well, especially considering the floor damage. We’ll look at a few of these when we zoom in. Temporal AA delivers the best quality of all, if only because there’s no error on the ground.

Our bunker close-up shot focuses on the map board at the back of the room. It’s striking how bad the default native rendering is. From our vantage point in front of the orange tarp, the close-up native line isn’t actually a solid line of string at all, but a series of dashes. DLSS Quality fixes both the dashes and the detailing on the metal box to the left of the map. DLSS Balanced and DLSS Quality are quite similar here.

Nvidia native rendering versus DLSS, bunker close-up. User-adjustable image comparison available on Imgsli.

Interestingly, Temporal AA is worse on this map closeup than the other settings, even if it looks better in the scene as a whole. Line weights and handwritten text on notes pinned to the board are both stronger with DLSS. Temporal AA manages to beat native, but the setting does not impress here.

AMD FSR: Bunker

The bunker on an AMD GPU has the same visual problem DLSS showed. Both DLSS and FSR change how shiny certain surfaces are and how reflective they look. That’s not a bad thing, but it would stand out as a difference between enabling and disabling these technologies even if the floor weren’t oddly textured. The problem, despite being quite visible in stills, doesn’t really stand out in gameplay with FSR, either.

AMD FidelityFX Super Resolution versus native resolution. User-adjustable comparison available on Imgsli.

Pan back and forth in the image comparison above between native 1440p and FSR 2.0 Quality, and you may notice that one of the lockers in the back appears to lose a line that defines an upper compartment. Zoom in, and it’s easier to see that while the line wasn’t removed, it shimmers less and is less visible. You can also see that FSR 2.0 improves the string rendering on the wall map in the back, even without zooming. FSR 1.0 Ultra Quality looks somewhat worse than native 1440p with no AA technology enabled.

Comparison between native AMD rendering and FSR 2.0. User-adjustable comparison available on Imgsli.

Not much new to say here. Native looks bad on AMD as well, and FSR 2.0 is substantially better than FSR 1.0. I forgot to grab a “Balanced” screenshot for FidelityFX Super Resolution 2.0 for this one — my apologies. But this is an easy win for FSR 2.0, without much more to say about it.

FSR versus DLSS: Bunker

Nvidia DLSS versus AMD FSR 2.0. User-adjustable comparison available on Imgsli.

Both companies’ solutions create an error on the floor, so we’re going to call that a wash and compare on the basis of other characteristics. You may be hard-pressed to see much in the way of variance unless you zoom in, at which point some distinctions appear. Once again, FSR 2.0 is a slightly sharper solution while DLSS blurs just slightly more. Differences this small typically come down to personal preference — do you like a bit of blur to guard against shimmer and jagged lines, or do you prefer maximum detail?

DLSS versus FSR 2.0: Bunker Close-Up

Neither DLSS nor FSR 2.0 looks fabulous in this close-up shot, but DLSS gets the nod from us for its ability to create slightly more legible text. Line strength is better with FSR 2.0 compared to Deep Learning Super Sampling, but we’d give the nod to Nvidia overall.

User-adjustable image comparison available via Imgsli.

Nvidia DLSS: Panel Close-Up

We’ve pivoted (literally) towards the console panel you can see above, to get some close-up shots and measure DLSS versus FSR at minimum range. We’ll start with the DLSS comparisons, though we’ve also chucked an Nvidia run of FidelityFX Super Resolution into the mix, just to see how an Nvidia card fares when using AMD’s older rendering method.

Nvidia Native 1440p versus DLSS Quality. User-adjustable image comparison available at Imgsli.

DLSS Quality looks quite similar to native resolution here. While there’s a slight blurring, it’s not very much. AA methods often create at least a small amount of blur, after all. Balanced quality is noticeably worse, however, with significant gauge blur and fine detail loss. Temporal AA deals with some bright jagged lines that DLSS Quality doesn’t and changes the overall lighting a bit. FSR 1.0 does a reasonable job cleaning up the image in some places, but it creates text distortion in the gauge readouts.

Nvidia running FSR 1.0 versus AMD. Image isn’t perfectly aligned due to the need to swap GPUs. User-adjustable image comparison available at Imgsli.

Here, the slight blurring from DLSS Quality is preferable to the increased jaggies in the FSR 1.0 image. FSR 1.0 isn’t really the point of this article, but we wanted at least one comparison between Nvidia and AMD on this point. While FSR 1.0 output isn’t literally identical between the two companies — AMD’s text on the panels is ever-so-slightly blurrier than Nvidia’s — the two are close enough to demonstrate equivalent support.

AMD FSR 2.0: Panel Close-Up

Here’s AMD’s close-up on the instrument panel, compared across native resolution, FSR 2.0, and FSR 1.0.

Native resolution versus FSR 2.0 versus FSR 1.0. User-adjustable comparison available at Imgsli.

FSR 2.0 really shines here. The panel is higher quality with less blurring with FidelityFX Super Resolution 2.0 enabled in Quality mode than it is in native 1440p, as shown below:

A zoomed-in comparison of the two images shown above.

FSR 2.0 improves AMD’s image quality over and above baseline. That’s a trick FidelityFX 1.0 can’t match.

AMD FSR 2.0 versus DLSS: Panel

AMD’s FSR 2.0 wins this comparison against DLSS. The sharper rendering FSR 2.0 offers pays dividends here, making written text and gauge numbers easier to read than they are with DLSS. DLSS, in turn, renders significantly better text than FSR 1.0. Both technologies perform well here, and the gap between them isn’t huge.

DLSS vs. FSR. User-adjustable comparison available on Imgsli.

While we preferred DLSS for the background map and text in our previous comparisons, we like FSR 2.0 more for the panels and associated gauges.

Putting It All Together: Who Comes Out on Top?

Between DLSS and FSR 2.0, I narrowly prefer FSR 2.0. Honestly, it’s a wash at anything less than a painstaking comparison — it’s not as if you notice a fractional difference in text that’s too blurry to read when playing the game normally. Both technologies broadly deliver what they say they will — namely, a performance improvement even at the highest quality settings.

What matters more for AMD is matching Nvidia’s ability to field an image-enhancing algorithm that improves performance instead of hurting it. In that regard, FSR 2.0 succeeds tremendously.

Technologies like FSR 2.0 could be particularly helpful for mobile and low-power gaming devices, especially products like the Steam Deck. Tests show that technologies like DLSS and FSR can improve rendering performance by 20-40 percent depending on the title and your preferred settings. Getting that kind of uplift any other way typically means buying a new, substantially more expensive GPU.

This shift has short- and long-term implications. Because FSR 2.0’s hardware demands are steeper than FSR 1.0’s, the number of people who can take full advantage of the technology today is smaller. Over time, however, this feature will be a mainstream capability in every GPU AMD manufactures. Intel will presumably follow suit. Once that happens, gamers can look forward to substantially better performance.

Long term, we expect Intel, Nvidia, and AMD to shift their efforts towards a mixture of AI and non-AI techniques intended to improve image quality without paying the penalty of rendering pixels at their native resolution. FSR 2.0 is an important step on that journey.

from ExtremeTech https://ift.tt/a6jmEu4

Thursday, May 26, 2022

NEWS TECHNOLOGY

Back in April, AMD made news by saying it was “gonna try to make a big splash with overclocking” with its upcoming Zen 4 CPUs. That would be somewhat of a departure from Zen 3, which isn’t exactly known for its overclocking headroom. Now that Computex has come and gone, we’ve been able to see Zen 4 in action. Despite AMD’s statements, it’s not clear how much the overclocking situation has changed. While Zen 4 clearly allows for overclocking, Zen 3 never impressed in this regard, and Zen 4 may not, either.

As a refresher, at Computex AMD showed a prototype 16C/32T Ryzen 7000-series “Raphael” CPU running Ghostwire: Tokyo. A CPU clock speed monitor was running in the corner, so we could see its clock speeds. Although clocks fluctuated in the low-5GHz range during the demo, the chip did hit a notable peak of 5.5GHz. Dr. Lisa Su said this is normal, as the chip will hit variable clocks depending on the workload. We don’t know what the actual boost clock of the chip is, but it’s clearly higher than the 5950X’s 4.9GHz.

In an interview with PCWorld, AMD’s Robert Hallock confirmed nothing fancy was required to hit those clocks. He said they were using a standard 280mm AIO cooler you can buy online. This is a not-so-subtle reference to the time Intel was caught using a chiller to cool a 28-core desktop Xeon chip. Regardless, he said the CPU wasn’t overclocked, and that “most of the threads” were running at 5.5GHz. This raises the question: if it can hit 5.5GHz on its own, how high can it go with an overclock?

There’s one additional thing to point out. AMD released info on its upcoming AM5 chipsets (above), and you’ll note it doesn’t list overclocking as a feature offered on B650 boards. AMD has since clarified that B650 will indeed allow overclocking. This means every motherboard in the stack will allow it, so it’s open season when AM5 and Zen 4 launch this fall.

But what kind of results can we expect? Color us skeptical, but we’re still not expecting much. For example, the 5.5GHz AMD showed off is the current high-water mark for 16-core CPUs; it’s the single-core boost clock of Intel’s binned Core i9-12900KS, after all. If AMD is allowing its 7000-series CPU to get to 5.5GHz on its own, right out of the box, it seems like going even further might be a fool’s errand. As we’ve stated before, if AMD could get it to run at 6GHz without fancy cooling, why limit it to 5.5GHz? Even if it’s rated for a single-core boost of 5.5GHz and you get it up to 5.7GHz, that’s still less than a four percent single-core overclock.
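That last bit of math is trivial, but worth spelling out:

```python
# The single-core headroom implied by the numbers above.
stock_boost_ghz = 5.5
manual_oc_ghz = 5.7
headroom = (manual_oc_ghz - stock_boost_ghz) / stock_boost_ghz
print(f"{headroom:.1%}")  # 3.6%, i.e. under four percent, as noted
```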

Over the last five years, AMD has chosen to leave relatively little on the table for manual overclockers, preferring instead to ship CPUs that run quite close to their maximum possible frequencies out of the box. While it may still be possible to manually overclock an AMD Ryzen for performance gain, we’ve had far more luck cranking up clocks on high core count CPUs like the Ryzen Threadripper 3990X, where all-core overclocks of 300-500MHz are possible given sufficient cooling. In this kind of scenario, OCing can still pay dividends over and above stock clock — but Threadripper is a workstation platform and a workstation platform limited to artificially lower clocks at that.

For Zen 4, AMD has cranked up the power requirements by a significant amount, which will also allow it to raise clocks. It’s gone from a 105W TDP on the 5950X to a 170W TDP, with a maximum socket power of 230W. That’s a huge boost, and it will give AMD some added flexibility. Still, it seems like the song will likely remain the same: the lion’s share of overclocking gains probably won’t come from core clock speeds. Instead, they’ll come from overclocking memory and Infinity Fabric, just as they did on Zen 3. Even Robert Hallock himself has noted that’s where most of the gains have historically come from on AMD’s CPUs. This is seemingly confirmed by reports that AMD is focusing heavily on memory overclocking with Zen 4, via its new EXPO technology.

None of this is meant as a slight to AMD because, as we’ve said before, the world has changed when it comes to overclocking. For both AMD and Intel, the days when they could leave 20-30 percent of a CPU’s clock headroom (or more) on the table are long gone. As transistor density increases and node sizes shrink, it’s becoming more difficult to achieve higher clock speeds while keeping thermals in check. This has been the pattern for some time now, and there’s no reason to think it will suddenly change with Zen 4. Intel and AMD may make some limited carve-outs for overclockers, but we expect both companies to reserve the vast majority of their performance improvements for themselves.

from ExtremeTech https://ift.tt/QvBOwD3

NEWS TECHNOLOGY

(Photo: Andy Holmes/Unsplash)
Sometimes experiments don’t go as planned. Case in point: when scientists set out to make hamsters more “peaceful” via gene editing, they accidentally made the fuzzy little rodents more aggressive instead.

Neuroscientists at Georgia State University (GSU) wanted to see how vasopressin, a mammalian hormone, influenced social behavior. They used a relatively novel technology called CRISPR-Cas9—which allows scientists to edit organisms’ genomes—to suppress a vasopressin receptor in Syrian hamsters. The expectation was that preventing the hamsters’ bodies from utilizing vasopressin would result in calmer, more peaceful behavior—but the result was anything but. 

Instead, knocking the vasopressin receptor out of the hamsters’ systems resulted in “high levels of aggression,” particularly toward hamsters of the same sex. Though “normal” male hamsters are notoriously more aggressive than females, the startling change occurred in both sexes.

(Photo: Henri Tomic/Wikimedia Commons)

Previous studies have suggested that more—not less—vasopressin correlates with higher levels of cooperation. In 2016, researchers at the California Institute of Technology in Pasadena found that administering vasopressin to humans resulted in an increased tendency to “engage in mutually beneficial cooperation.” This aligns with even earlier research, which showed that vasopressin may be responsible for regulating social behaviors related to sexual expression and aggression.

“This suggests a startling conclusion,” said H. Elliott Albers, a neuroscience professor and the leader of the study, in a GSU statement. “Even though we know that vasopressin increases social behaviors by acting within a number of brain regions, it is possible that the more global effects of the Avpr1a receptor are inhibitory. We don’t understand this system as well as we thought we did.”

Syrian hamsters are ideal test subjects for a number of research purposes, including those targeting social behavior, cancer, and even COVID-19. “Their stress response is more like that of humans than other rodents. They release the stress hormone cortisol, just as humans do. They also get many of the cancers that humans get,” said Professor Kim Huhman, Associate Director of the Neuroscience Institute at GSU. “Their susceptibility to the SARS-CoV-2 virus that causes COVID-19 makes them the rodent species of choice because they are vulnerable to it just as we are.”

GSU’s Neuroscience Institute and similar establishments intend to continue investigating the effects of suppressed or increased vasopressin in mammals. As the related body of research grows, so might treatments for depression and other mental illnesses.

from ExtremeTech https://ift.tt/sxKAeZP

NEWS TECHNOLOGY

Decades ago, Microsoft used its might to turn Internet Explorer into the de facto standard for web browsing. That got the company into hot water with regulators, but it was all for naught. Internet Explorer was on the decline for a long time before Microsoft moved to Edge. Now, it’s the end of the line for IE. Microsoft is following through with its plans to retire IE as announced last year, and the big day is just three weeks away on June 15th. 

As recently as the early 2000s, Microsoft’s browser held almost the entire market. Then came the likes of Firefox and Opera, which began eating into Microsoft’s lead. The tipping point came at the end of that decade, as people began expecting more from their desktop browsers while Microsoft remained hesitant to add modern features to IE. Google released Chrome in late 2008, and IE usage went into a precipitous decline. Chrome was neck and neck with IE by mid-2012, and then it left Microsoft’s browser (and everyone else’s) in the dust.

Microsoft began updating Internet Explorer more regularly, but the damage was done. It moved to the Edge browser in 2015 with the release of Windows 10, but even that didn’t reverse the company’s online fortunes. It recently scrapped the old Edge and moved to a version based on the same open source Chromium code as Google Chrome. 

Microsoft has been moving toward killing IE for several years, but its usage share is still around 0.38 percent, according to StatCounter. Edge, meanwhile, enjoys a four percent market share, and Chrome is around 64 percent. While IE’s user base is tiny, even a fraction of a percent of the entire internet-using population is still a lot of people. Microsoft is naturally urging these folks to upgrade to Edge in advance of June 15th, and it’s not taking “no” for an answer. 

The upcoming deadline isn’t just the end of support — Internet Explorer has been in maintenance mode for years. This is when Microsoft will begin actively disabling Internet Explorer on Windows 10 systems. These computers will all have Edge pre-installed, so users will have another browser ready to go. In the days and weeks after June 15th, users who try to load Internet Explorer will find themselves redirected to Edge. Microsoft has recently clarified that it may also roll out a system update that streamlines the process.

Anyone still on an older (and unsupported) version of Windows with Internet Explorer could technically continue using it, but that setup is an inadvisable security nightmare. It’s time for IE stragglers to get with the times, whether they want to or not. For those who need to access sites and services that inexplicably only play nicely with Microsoft’s old browser, Edge offers an IE mode for individual tabs.

from ExtremeTech https://ift.tt/lHvOJdj

NEWS TECHNOLOGY

(Photo: Intuition Robotics)
New York State is helping hundreds of older residents remain connected with loved ones by distributing robots to their homes. 

The program is being organized by the New York State Office for the Aging (NYSOFA) in partnership with Intuition Robotics, an Israeli tech startup. Intuition Robotics’ central product is ElliQ, a robot “sidekick” tasked with preventing loneliness among the elderly. The voice-activated robot doesn’t perform physical tasks, but rather attempts to keep older adults in touch with their families and communities while monitoring basic wellness goals. 

The NYSOFA will give out more than 800 ElliQ robots as part of its ongoing effort to “battle social isolation and support aging-in-place,” per the organization’s press release. Older adults—especially those who “age in place,” meaning they remain in their homes instead of moving to a care home or assisted living community—have always been at greater risk of isolation due to decreased mobility, and the aging of the baby boomer generation has only swelled their numbers. The first year or so of the COVID-19 pandemic exacerbated this problem, given widespread advisories to stay home and limit face-to-face interaction. And elderly adults in the US are more likely to live alone than their peers anywhere else in the world.

ElliQ aims to mitigate this issue by gently reminding its human companions to call their loved ones—something they can do using ElliQ itself. The aesthetically pleasing tabletop robot looks almost like a lamp and sits on a flat base, to which a speaker and a simple tablet are also attached. Older adults can use the tablet to conduct video calls with family, send text and photo messages, and participate in exercise programs. ElliQ can also present companions with the news, the weather, music, games, and other information or entertainment options. The robot uses daily check-ins and regular assessments to help companions track their mental and physical health and, with the companion’s consent, share that information with trusted loved ones.

(Photo: Intuition Robotics)

“Despite misconceptions and generalizations, older adults embrace new technology, especially when they see it is designed by older adults to meet their needs,” said NYSOFA Director Greg Olsen in the release. “For those who experience some form of isolation and wish to age in place, ElliQ is a powerful complement to traditional forms of social interaction and support from professional or family caregivers.”

NYSOFA case managers will determine eligibility for the ElliQ distribution program using a few criteria, like age, Wi-Fi access, and ease of socialization with those outside their homes. Once recipients are identified, Intuition Robotics will meet with them to provide installation and training. 

“We’ve long believed that connecting older adults with local communities via ElliQ will add an important element in providing holistic support to older adults aging in place,” said Intuition Robotics co-founder and CEO Dor Skuler. “This partnership with NYSOFA helps us further that mission through an innovative initiative that we are incredibly proud to be part of.”

from ExtremeTech https://ift.tt/KxRiJvh

NEWS TECHNOLOGY

Getting from point A to point B in the solar system is no simple feat, and inefficient, heavy rockets aren’t always the best way. Therefore, NASA has announced it is moving ahead with a new solar sail concept that could make future spacecraft more efficient and maneuverable. The Diffractive Solar Sailing project is now entering phase III development under the NASA Innovative Advanced Concepts (NIAC) program, which could eventually lead to probes that use solar radiation to coast over the sun’s polar regions. 

The concept of solar sails is an old one — the idea has been kicking around for decades. The gist is that you equip a vessel with a lightweight sail that translates the pressure of solar radiation into propulsion. The problem is that a solar sail has to be much larger than the spacecraft it’s dragging along. Even a low-thrust solar sail would need to be almost a square kilometer, and you need to keep it intact over the course of a mission. Plus, a conventional sail has little choice but to fly in the direction the sunlight pushes it, so you have to make tradeoffs between power and navigation. Futuristic diffractive light sails could address these shortcomings.

This work is being undertaken at the Johns Hopkins University Applied Physics Laboratory under the leadership of Amber Dubill and co-investigator Grover Swartzlander. The project progressed through NIAC phases I and II, in which the team developed concept and feasibility studies for diffractive light sails. The phase III award provides $2 million in funding over the next two years to design and test the materials that could make diffractive light propulsion a reality.

A standard lightsail developed by the Planetary Society in 2019.

A diffractive light sail, as the name implies, takes advantage of a property of light known as diffraction: when light passes through a small opening or a fine grating, it spreads out on the other side. This can be used to make a light sail more maneuverable, so it doesn’t need to travel wherever the sunlight happens to push it.
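The payoff is geometric. A grating deflects photons by an angle given by the standard grating equation, and deflected photons impart a sideways momentum kick to the sail. Here’s that equation worked through with illustrative numbers of our own choosing:

```python
import math

# Grating equation: d * sin(theta) = m * wavelength. Illustrative
# values, not the project's actual design: a 1-micron grating period
# and 500 nm (green) light in the first diffraction order.
d = 1.0e-6           # grating period, meters
wavelength = 500e-9  # meters
m = 1                # diffraction order
theta = math.asin(m * wavelength / d)
print(f"deflection angle: {math.degrees(theta):.0f} degrees")  # ~30

# Deflecting photons sideways by theta transfers transverse momentum
# proportional to sin(theta), letting the sail generate thrust at an
# angle to the incoming sunlight instead of only directly along it.
```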

The team will design its prototypes with several possible mission applications in mind. This technology most likely won’t have an impact on missions to the outer solar system where sunlight is weaker and the monumental distances require faster modes of transportation. However, heliophysics is a great use case for diffractive lightsailing as it would allow visiting the polar regions of the sun, which are difficult to access with current technology.

A lightsail with the ability to essentially redirect thrust from a continuous stream of sunlight would be able to enter orbit over the poles. It may even be possible to maneuver a constellation of satellites into this difficult orbit to study the sun from a new angle. In a few years, NASA may be able to conduct a demonstration mission. Until then, it’s all theoretical.

from ExtremeTech https://ift.tt/TsipBWQ

NEWS TECHNOLOGY

We’ve all been there: We’re in VR but we’re stuck in our living room and unable to take our metaverse experience to the local coffee shop. (Have we all been here? And are we certain we want to go? -Ed)

Zotac’s VR GO 4.0 backpack PC is hoping to change that. It’s a full-blown workstation-slash-mobile gaming PC in a backpack form factor. As its name implies, it’s the fourth generation of Zotac’s wearable PCs. It’s essentially a slim PC with a desktop GPU that you can strap to your back. It can sit on your desk like a regular computer, then be unplugged for on-the-go VR gaming. It even includes hot-swappable batteries and an RTX GPU to boot. The VR headset is sold separately, of course.

What Zotac has done here is build a small form factor (SFF) PC, then attach it to a wearable harness. It has also added the ability to unplug it and run on battery power, though you won’t be spending hours in the metaverse: the 6000mAh battery is only rated for 50 minutes of playtime. Thankfully, it comes with a second hot-swappable battery, but carrying around extra batteries is probably not enjoyable. You can buy as many extras as you like, though it’s unclear how much they cost. A similar battery for the 2.0 version costs $149.
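Zotac doesn’t publish the pack voltage, but a little hedged arithmetic shows why the runtime is so short. Assuming a typical four-cell pack around 15V (our assumption, not a listed spec):

```python
# Rough energy math for the VR GO 4.0 battery. Zotac cites 6000mAh
# and ~50 minutes of play; the voltage below is OUR assumption (a
# typical 4-cell Li-ion pack), not a published spec.
capacity_ah = 6.0
assumed_volts = 15.2  # assumption
energy_wh = capacity_ah * assumed_volts  # ~91 Wh
runtime_h = 50 / 60
print(f"~{energy_wh:.0f} Wh pack, ~{energy_wh / runtime_h:.0f} W average draw")
# ~109 W, well below these parts' combined desktop power ratings,
# which implies aggressive power limits when running on battery.
```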

Despite its form factor, it seems like a pretty decent gaming PC, albeit with some unexpected components. For example, it has an 11th Gen Tiger Lake Core i7-11800H CPU, which is a 45W mobile part. Why Zotac didn’t go for an Alder Lake part is a mystery. The GPU is weirder still: it’s an Nvidia RTX A4500 “professional” Ampere card with 16GB of VRAM. We reached out to Zotac about why it chose this particular GPU, but didn’t hear back. Suffice it to say, it’s a very strange inclusion in a product marketed toward gamers. This is the kind of GPU you use to run professional compute applications, not Beat Saber. The company is also offering the same solution in a new SFF workstation. This may be a configuration the company settled on for several projects — the workstation has the same specs as the VR GO, but isn’t mounted in a harness.

The VR GO lets you easily add more memory and upgrade the M.2 SSD. (Image: Zotac)

Other specs include 16GB of DDR4 SO-DIMM memory, a 512GB M.2 SSD, Wi-Fi 6E, and all the usual connectivity options. It allows expansion via USB and has HDMI 2.1 and DisplayPort 1.4 for desktop duty. Naturally, it also has RGB lights, because you need that on a PC you wear on your back. It’s great for battery life, after all.

Overall, this is a weird product from Zotac. The marketing copy that accompanies it is PR word salad about breaking boundaries and envisioning new experiences. In other words, it’s the same old metaverse bollocks we’ve heard before. It even includes bizarre mentions of “boosting data science model training” and running engineering simulations. We’re kind of baffled by it, to be honest. It’s also hard for us to envision a scenario where we’d want to use a VR headset in an environment other than a spacious living room. Oh, and did we mention it weighs 11 pounds? Due to its size, Zotac includes a metal support frame and a support strap that goes around your waist. Suffice it to say, you will know you are wearing this thing despite its reported “all day comfort.” Perhaps it’s a good thing the battery only lasts 50 minutes.

from ExtremeTech https://ift.tt/PcWmKkB

Wednesday, May 25, 2022

NEWS TECHNOLOGY

“TV will rot your brain!” We’ve all heard that old canard. The idea that screen time makes kids dumber is a staple of the “Father, I cannot click the book” genre of boomer humor. But science is turning that narrative on its head. A newly published report using data from the ABCD Study indicates that screen time doesn’t rot kids’ brains after all. On the contrary: video games might actually make kids smarter.

The ABCD Study

The Adolescent Brain Cognitive Development (ABCD) Study is a gigantic longitudinal study of American child health and brain development.

The ABCD study timeline.

Participants begin at 9-10 years of age. Through its massive slate of tests and surveys, the project records a wide array of behavioral, biometric, and genetic information from participants and their parents. Then, project scientists follow up with participant families until the kids are 19-20 years old.

Data from the project is freely available for other researchers to use in their own studies, and once or twice a year the ABCD Study releases an updated dataset. Now, researchers from Vrije Universiteit Amsterdam and Sweden’s Karolinska Institutet have used that data to find out what video gaming really does to children’s brains.

Video Games Can Make Kids Smarter

“For our study,” two of the new study’s authors explain in a joint statement, “we were specifically interested in the effect of screen time on intelligence – the ability to learn effectively, think rationally, understand complex ideas, and adapt to new situations.”

In particular, their model looked at how much time kids spent staring at glowing rectangles, and at how they used that screen time. Most of the kids in the study used their screens in three ways: watching videos (e.g., YouTube), socializing online, or playing video games. The model compared gamers and non-gamers on tasks including reading comprehension, memory, visual-spatial processing, and executive function.

The researchers wanted to cover a wide variety of subdomains of intelligence. “However, intelligence is highly heritable in the populations we’ve studied so far,” study author Dr. Bruno Sauce explained to us over Zoom. Furthermore, “genetic and socioeconomic factors were our two major confounders.” So, to account for variations in these factors, the researchers rolled in genetic data and socioeconomic information from the participants’ parents.
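In practice, accounting for confounders like these means estimating gaming’s effect while holding baseline scores, genetics, and SES fixed. Here’s a minimal sketch of that kind of adjusted model, using hypothetical column names of our own invention; this is our illustration, not the authors’ actual code (they fit a more sophisticated model with the same ingredients):

```python
# Minimal sketch of a confounder-adjusted analysis: follow-up
# intelligence predicted from gaming time, controlling for baseline
# score, a polygenic index, and socioeconomic status. Hypothetical
# dataset and column names, for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("abcd_subset.csv")  # hypothetical prepared extract

model = smf.ols(
    "iq_followup ~ gaming_hours + iq_baseline + polygenic_score + ses",
    data=df,
).fit()
# The coefficient on gaming_hours is the association between gaming
# and later IQ, net of the measured confounders.
print(model.params["gaming_hours"])
```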

After two years, results showed that screen time had no negative effect on cognitive skills. On the contrary: kids who had stuck with gaming showed a subtle but persistent improvement in their IQ.

Play is Practice

“Our results support the claim that screen time generally doesn’t impair children’s cognitive abilities,” said study coauthor Torkel Klingberg of the Karolinska Institutet. Instead, “playing video games can actually help boost intelligence.”

Listen, I could have told you that. The ideas of spell slots, mana regen, and cooldown timers all contributed to my emotional literacy. Bullet hell games improved my reaction times and made me better at dodging skillshots. Learning to play different types of games — League of Legends, Oxygen Not Included, Satisfactory, Foundation — taught me a lot about learning the rules in any given system. And let me tell you something, League made me a lot more chill about losing, and better at taking criticism.

Image from the study. Original caption: Density plot of time spent Gaming (raw values) between boys and girls at ages 9–10.

Here’s the thing. Play is practice. For example: Learning to ride a bike improves your dexterity and timing. So does learning to play Super Mario Bros. You’re literally improving your DEX stat in real life. That skill then makes it easier to learn in other situations that require coordination, balance, and strength. Likewise, team sports encourage cooperation and build conflict resolution skills. First-person shooters and racing games can even improve a person’s reflexes. The effect is the same for boys and girls. Popular Mechanics reports that in tests, “participants playing a first-person shooting game were up to 50 percent better at identifying, locating, and tracking objects—skills that are also critical in real race driving—than nongamers.”

Games Can Rewire the Brain — For the Better

Even Geico agrees that skill-building driving games can make teens better drivers. However, not all games have the same beneficial influence. Treat the world like it’s GTA V, and you become more inclined to break the law. Driving games help when they’re designed to build driving skills.

That’s how video games can make kids smarter in real life. Games use practice to build skills, and building intelligence requires a wide variety of skills. It’s all about learning. “Higher intelligence seems to mean you learn faster,” explained Dr. Sauce. “And higher intelligence in a given subdomain tends to correlate with higher intelligence in other subdomains.”

“The story may be more complicated,” Dr. Sauce told us. “There’s a lot we still don’t know about the plasticity of intelligence. But it matches parallel findings we’ve seen about deliberate practice.”

Through practice over time, these skills enmesh themselves into the brain. In addition to improving dexterity, games that build real-life skills can also buff your real-life INT stat. In short, the right games can rewire the brain, for the better.

Then Why Did Brain Training Games Flop?

Do you remember when brain training games were new? They promised everything from increased IQ to protection against Alzheimer’s disease. There was some science to back up the claims. For one thing, there’s a correlation between cognitive performance and overall brain health. It’s also true that testing out skills as we learn seems to result in better learning outcomes than simply re-reading. Furthermore, people with higher IQ scores often score higher in other tests of cognitive skills. But brain games have largely failed to deliver on their loftier promises.

One reason is that there’s simply more to brain health than memory recall, executive function, or any other single thing. It turns out getting good at sudoku isn’t the same thing as protecting the brain against aging-related diseases. Data from other studies suggest that staying active and engaged — using your mind and body — is highly important to overall brain health.

Crossword puzzles and brain training games can’t fix genetic diseases. But they do train cognitive skills. “It seems that cognitive training does have some benefits,” Dr. Sauce told us, “but those benefits are much narrower in scope than the hype cycle originally suggested. I think that romantic era is gone, and we’re much more skeptical about that — at least, I am.”

Instead, the rollerblading grannies might just have the right idea. To get sharp and stay healthy, it’s important to have fun and learn new things throughout life. It’s official. Science says: go play!

 

The research is published in Scientific Reports. We’d like to thank Dr. Bruno Sauce, who graciously allowed us to pepper him with questions. Feature image by RebeccaPollard, CC BY-SA 2.0.

from ExtremeTech https://ift.tt/OE2Zqbm