Friday, December 31, 2021


As the end of the year rolls around, it’s a time to reflect — in this case, on some of the games we played throughout the year. Most of us at ExtremeTech are gamers of one sort or another, and we’ve rounded up both what folks have played and what they liked. Note that games on these lists aren’t just titles that were published in 2021. Anything that caught a writer’s attention qualified for inclusion.

Joel Hruska

My own top games this year would be Hades, Orcs Must Die 3, and Dead Cells. Hades and Dead Cells are both roguelike games, while Orcs Must Die 3 is a hybrid tower defense / third-person combat game that tasks you with, well, killing a lot of orcs, trolls, and various other monsters.

Hades and Dead Cells are very different types of games, despite the fact that they both belong to the same genre. Hades is played from an isometric perspective and is relatively generous when it comes to invulnerability frames and dodge mechanics. It’s fairly friendly to brand-new players and the story advances each time you die and start another run. This helps ease the sting of losing, especially if you wind up stuck on a boss or level and need to run the dungeons more than once to move forward. Hades offers relatively few weapons, but the five weapons available all feel distinctive and the various unlockable “Aspects” change the way weapons work, necessitating a corresponding change in play style.

Dead Cells is a 2D platformer, not an isometric game like Hades. It’s also less forgiving about hit boxes and mid-run death.

Dead Cells has much less of a story than Hades and far more weapons. Players have the option to carry either two weapons and a pair of secondary skills or a weapon + shield. Where Hades focuses on building a small array of weapons with distinctive aspects that change how they function, Dead Cells has a huge variety of melee and ranged attack weapons. The learning curve can be steep if you aren’t much good at platformers, but there’s a great game underneath the difficulty curve.

Orcs Must Die 3 is the long-awaited sequel to OMD and OMD2, and it introduces a number of Warzone maps that feature larger numbers of enemies and new, higher-end traps and NPC allies to defend with. Reviews on OMD3 weren’t quite as good as they were for OMD2, but there’s still a lot of fun to be had — especially if you haven’t picked up OMD2 in a few years and wouldn’t mind some new maps to play with.

Honorary mention to Mass Effect: Legendary Edition. While it’s a remaster of old games, it’s one of the best ones I’ve played. The updates to Mass Effect make that title look a bit more like later games in the series and add a cover system. Mass Effect 2 and ME3 remain some of the best RPGs ever built. While even the remastered game shows its age in places, the Mass Effect trilogy remains one of the best science fiction series I’ve ever encountered, in any medium. Bioware’s world-building stands on par with what Star Trek, Stargate, and Star Wars have offered at their best.

Ryan Whitwam

In this age of disappointing, buggy game remasters, I was worried Mass Effect Legendary Edition would fail to live up to my expectations. Happily, my fears were unfounded. I loved these games the first time I played them — yes, even the third one, which featured a silly online component and a disjointed final cutscene. Going back in the Legendary Edition, I found the story and characters held up, and the tweaks to gameplay made all three games feel more modern.

I’m a sucker for a good sci-fi plot, and the story in Mass Effect stands out as particularly well-crafted. Too many games that try to tell epic stories have crummy dialog or flat voice acting, but all three chapters of Mass Effect nail the storytelling. And your choices actually matter. The devs didn’t make any major changes to the gameplay, which is great because even the original still plays well. The biotic and tech powers make the games a real delight to play (take my advice and don’t choose the boring old soldier class). Plus, the quality-of-life improvements like simpler leveling and no mandatory online component in ME3 make an impact. You can even skip that damn elevator ride on the Citadel in the first game.

Whether or not you played Mass Effect the first time around, the Legendary Edition is worth your money.

Jessica Hall

Some people are confident, skilled gamers — people who know what a “rogue-like” versus a “rogue-lite” is off the top of their heads, identify with the title “gamer,” and don’t struggle with minigames and puzzles. I am not one of these people. But over the past year, I’ve sunk a ton of time and attention into a handful of games, including Satisfactory, Oxygen Not Included, Rimworld, and Foundation.

Satisfactory is a seriously meta, seriously addictive factory/automation game about building stuff so you can build more stuff in order to build more stuff. If you aren’t done enough with work when you get out of work, come home and get to work in your factory. Resource nodes are bottomless and nobody has ever heard of “carbon dioxide” or “pollution management.” Build increasingly complex factory lines, send increasingly complex stuff up the Space Elevator, and learn to hate spitter doggos but love shrimpdogs. Satisfactory is either ideal or a unique personal hell for those who need things to be symmetric. “Spaghetti” and “linguine” become terms of disdain.

While the game is still in early access, it is robustly mod-compatible and has a huge Steam Workshop mod library. The recent Update 5 rolled many of the most popular mod features into the game’s vanilla code, including signs, road markings and other decals, a universal grid, and even better clipping rules. Better still, there’s now a native mod manager.

Oxygen Not Included sure does mean what it says on the tin. Crashlanded on a hostile chunk of rock in the middle of the big empty, colonists (duplicants, in ONI parlance) have to dig up the raw materials they’ll use to create a home for themselves. Whether the colonist group chooses to stay on their asteroid or reach for the stars, the game features a realistic, intricate and unforgiving resource management simulator that goes right down to kiloDaltons and Kelvins. Thankfully, there’s a ton of tutorials baked right in. If math had been more like this, I’d have done my homework.

Rimworld is an open-ended colony-building RPG — much more like D&D than Final Fantasy, in that instead of following a plot already laid out, you choose characters and set them loose into the procedurally-generated world to help them write their own story. Like ONI, Rimworld features a small but growing group of colonists, scratching out a precarious foothold on an unknown rimworld. Instead of asteroids, though, these rimworlds are planets — inhabited by other factions who may be less than welcoming to newcomers. Hostile factions can launch raids, and there are other, more sinister late-game temptations and hazards. Like ONI, the game’s higher difficulty settings are unforgiving.

Foundation is somewhat like a medieval SimCity. To prosper in the long haul, the player must skillfully employ the map tile’s limited resources to create abundance for the citizens of their new nation. Residents need food of increasing quality, adequate housing, and sufficient access to houses of worship. Like ONI and Rimworld, it’s a meditative, easily modded ant farm that you get to design and shepherd along toward win conditions you define.

Honorable Mention: No Man’s Sky and Phasmophobia

There should be an award for games that started out underwhelming and got really good as the developers kept working on them over time. No Man’s Sky should be on that list. In the beginning, it was all pretty wonky and very obviously procedurally generated, and you could cheese the system pretty hard for infinite money, infinite chromatic metal and indium, elite ships, et cetera. But after years of work and listening to feedback, NMS is a polished, engaging game, full of new missions and craftables and characters.

The same goes for Phasmophobia. Phasmo is still in early access, but it’s gone from a glorified demo to a ghost-hunting game so genuinely creepy I mostly won’t play it alone. It started out with just a handful of ghosts and evidence types, with a ghost behavior AI that you could meme on by looping the ghost around the kitchen counter a few times. Over time, in close cooperation with streaming playtesters like Insym, the game has blossomed into something spookier and frankly more interesting. Ghostly behavior is much improved, quests and evidence types are more varied, and it’s a whole lot harder to be certain you’re safe. Ghosts can hear players now, and they’ll use noise and EM feedback to hunt down an unwary player with extreme prejudice. If you like whodunits and games that will spike your adrenaline with a sense of urgency and dread, Phasmophobia is a great pick.

Josh Norem

I’m the type of gamer who only plays AAA FPS games on my gaming PC, no indie or console stuff for me, thank you very much. So as you can guess, 2021 was mostly a year of heartbreak for me. I spent most of my time this year enveloped in Cyberpunk 2077, Far Cry 6, and Battlefield 2042, all of which had their issues but were still relatively enjoyable, with one exception.

Cyberpunk 2077 – Yes, I know this game came out at this time last year, but I didn’t get into it much until 2021, and while there are bugs and a whole lot of jank, I didn’t see much of either on the PC through multiple playthroughs. The game looked absolutely amazing on my 34-inch 120Hz panel, so much so that I ended up buying a pre-built PC with an RTX 3080 just for Cyberpunk. Once my new GPU was installed, I was able to run the game at max details at 1440p with full ray tracing enabled on everything, and it left me speechless. If you have not seen Cyberpunk with RT effects, you are missing out. Besides the graphics, I really got involved with the story, and a few of the game’s five endings really hit me right in the feels, especially the absolute worst one, if you know what I’m talking about.

Far Cry 6 – As a huge Far Cry fan, I thought this game would be pretty amazing. The chance to go to “Cuba” and fight for the revolution against Giancarlo Esposito with a pet crocodile should have been a slam dunk, but sadly Ubisoft made a lot of really questionable design decisions this time around. I did end up having fun throughout my time in Yara, and the wheelchair-bound dachshund Chorizo was a pleasant surprise, but the game was way too easy, and Ubisoft didn’t offer any way to adjust the difficulty, which is a bizarre choice. Overall, it was a Far Cry game, that’s for sure, but not in the same league as 3/4/5.

Battlefield 2042 – There’s not much to say about this other than it was a massive disappointment. Like many fans of the “old” Battlefield titles like 2, 3, and 4, I hoped this would be a return to the series’ origins after what we got with 1 and 5, but instead DICE/EA changed the series entirely into a “hero shooter” that just plain sucks. The maps are horrible, the combat is frustrating, you have to walk too much, and there are too many bugs to even begin to list them all. I’m still holding out hope they will turn things around like they did with 4, but as each day passes I’m getting less confident that will ever happen. If anything, they’ll probably start adding NFTs and new skins to the game before they bring back the scoreboard and the previous titles’ class structure.

Annie Cardinal (Writing on Behalf of David, Who Apparently Does Not Play New Games)

As I started to write up my favorite games of 2021, I realized that they were mostly the same as the ones I played and wrote about in our roundup last year. For a fresh perspective, I turned to our adult daughter, Annie, who is much more adventurous about exploring new titles. We’ve played several of them as a family, but she definitely has the most-well-formed opinion on them. Her recommended options include Townscaper, Ori and the Blind Forest, Super Mario Odyssey, and Gorogoa.

In an increasingly chaotic and unstable world, I was in search of a meditative game where I could escape from everyday life and transport myself to a worry-free place. Townscaper is an adorable and whimsical indie city builder with no purpose or end goal. The game starts as an endless ocean with a randomly generated grid pattern. Click on the water, and a small building appears with a soothing drip like a pebble falling into a pond. Click again, and it morphs seamlessly into a tower or archway. Build canals, courtyards, stairways, and bridges by clicking to add or remove buildings. Sometimes birds will land on the rooftops or colored bunting appears across a narrow street. It’s nearly impossible to create anything that looks unappealing. Make a little seaside city inspired by the Italian coast, or a grand palace and grounds on a mountainside – the choice is yours. But I challenge you not to think too much. It’s better that way.

Ori and the Blind Forest


As someone who did not play many video games growing up, I didn’t develop the familiarity with controllers required to play platformer games comfortably. I saw a playthrough of Hollow Knight and fell in love with its cute characters. However, Hollow Knight is incredibly difficult for beginners, so I found Ori and the Blind Forest, which has similar gameplay but is slightly less impossible to play. Ori, a sort of cross between a rabbit and a fox, is the sole beacon of light in a dark world as he whirls and spins and leaps over spikes and bugs and creatures that have poisoned the forest, attacking them with pulses of light. The world is immersive and beautiful, and I finally got over my fear of dying in a video game and am growing more comfortable with button combinations. After over 300 deaths at only a third of the way through, my resilience has improved enough that I only sometimes want to throw the controller at the screen, which is excellent progress for me. (The cost of a new OLED has saved my own LG CX on several occasions. – Ed.)

Super Mario Odyssey


I had played some recent Mario platformers, including Super Mario Bros Wii U and Super Mario 3D World. But Odyssey takes the cake by throwing Mario into a set of single-player 3D worlds. With the help of his animated hat, Cappy, Mario can attack and leap through worlds, saving them from Bowser’s crew and recovering the stolen artifacts on the way to Princess Peach’s forced wedding. Mario’s balloon ship, the Odyssey, takes him to gravity-defying worlds, from a desert populated by adorable sombrero skeletons and upside-down pyramids to a neon-colored food world run by forks in chef’s hats. Cappy’s biggest power is the ability to take control of enemies and use their powers to solve puzzles and beat other enemies. Ride a stone jaguar to mow down cacti and collect coins. Make a stack of goombas to reach a tall platform. Taking over the T. rex in the explorer-themed world is especially satisfying, as you stomp and crash through blocks and enemies. It was the first mainstream console-based video game I completed all by myself. Odyssey provided a virtual escape from lockdown and transported me to immersive landscapes with quirky characters and epic music at a time when I needed it most.

Gorogoa


This innovative puzzle game is short but truly shows that there is so much untapped potential in video game mechanics. You are presented with a two-by-two grid of four tiles. Each tile is hand-drawn art – sort of a cross between the trippy animation in The Beatles’ Yellow Submarine film and Sir John Tenniel’s original sketches in Alice in Wonderland. Click on areas in the tiles or slide them on top of each other to open up doorways using M.C. Escher-style 2D perspective tricks. Follow a young boy during a bomb blitz as he uncovers the mystery of an elusive flying monster. No words are required; the visuals alone tell this gorgeous story that will stretch your mind and your understanding of reality.

Unravel Two


Unravel Two is a cooperative platformer game where two yarn creatures, joined together by a string, work together to explore the memories and treacherous world of a few children in an adoption home. The two Yarnys jump and swing through the forest, an abandoned warehouse, and a lighthouse on an island in search of sparks that will help the children through their troubles. I loved playing this with my husband, as each of us had strengths and weaknesses in the game and could pull each other along while both contributing. I was better at solving the puzzles and figuring out timing problems, and he could more quickly navigate or jump through challenging areas when danger was following close behind. The game is both peaceful and stressful, and thankfully creates frequent save points so minimal progress is lost every time you inevitably jump into a flame or get eaten by a fish.

Adrianna Nine

I’ve always gone for light-hearted games—the world is stressful enough as-is. I’m also all for supporting small businesses, which means I often find myself chasing down games by indie devs to play on my Nintendo Switch or on my partner’s various couch consoles. My favorite indie finds of 2021 have been Going Under, a dungeon crawler that hilariously mocks Silicon Valley startup culture, and SkateBIRD, which is extremely similar to Tony Hawk’s series but far more charming (despite slightly clunky controls).

Planet Coaster isn’t by a small studio, but it’s worth mentioning, as I went through a nearly three-month stint of losing myself in virtual theme parks on a regular basis. I’ve always been a sucker for simulation games, and my very first computer game love affair was with the original Roller Coaster Tycoon, so this surprised absolutely nobody within my social circle.

Animal Crossing: New Horizons has also reclaimed its spot as a near-nightly go-to, now that the update (imagine a sparkle emoji here) is out.

A Happy New Year to you all.

from ExtremeTech https://ift.tt/3eEJXo5

Wednesday, December 29, 2021


Inside one of Riot Games’ offices. (Photo: Hughes Marino)
Current and former Riot Games employees have found victory in a $100 million payout soon to be made by the video game publisher. In settling a three-year discrimination class action lawsuit, Riot has agreed to issue a total of $80 million to the settlement class and cover $20 million in legal fees.

Employees’ legal battle with Riot began back in 2018, when one then-current and one former employee initiated a class action lawsuit on the basis of unequal pay, discrimination in hiring practices, and sexual harassment. According to a Kotaku report released around the same time, the employees alleged that Riot routinely came up with excuses not to hire women into leadership roles and preferred ideas suggested by male employees, even after female employees had already presented those same ideas. One of the two employees had been told several times by her own boss that her “cute” appearance contributed to her ability to obtain her role; another manager consistently told this employee that her husband and children must have really missed her while she was at work. 

As it turns out, the harassment and discrimination didn’t stop with those two initial employees. Kotaku’s investigative reporting found that many others had received unsolicited nude photos from bosses, on top of having been asked sexually explicit questions by colleagues. At one point, senior leaders circulated a list of female employees they’d be interested in sleeping with. The culture very much made working at Riot feel like “working at a giant fraternity,” as one female source put it.

Riot Games’ Hollywood office. (Photo: Hughes Marino)

In an effort to settle back in 2019, Riot originally offered qualifying female employees (both current and former) a $10 million payout. The California Department of Fair Employment and Housing (DFEH) and Division of Labor Standards Enforcement (DLSE) rejected Riot’s proposal, saying those who qualified could receive up to $400 million combined for enduring the alleged offenses.

The DFEH has endorsed the latest settlement offer, which provides a total of $80 million to 1,065 female Riot employees and 1,300 female contractors. The other $20 million will go toward covering legal expenses incurred since the start of the lawsuit. The company has also agreed to systemically reform its company culture in ways that include “independent expert analysis of Riot’s pay, hiring, and promotion practices, and independent monitoring of sexual harassment and retaliation at Riot’s California offices for three years.”

The publisher’s settlement comes just months after competitor Activision Blizzard found itself under fire for similarly egregious harassment and discrimination claims. “While we’re proud of how far we’ve come since 2018, we must also take responsibility for the past,” Riot said in a statement. “We hope that this settlement properly acknowledges those who had negative experiences at Riot and demonstrates our desire to lead by example in bringing more accountability and equality to the games industry.”

from ExtremeTech https://ift.tt/3JswA8D


Update (12/29/2021): It’s the end of the year, so we’re surfacing a few old favorites from earlier in 2021. The “M2” mentioned below refers to the Apple SoCs that eventually shipped as the M1 Pro and M1 Max.

Original story below:

With Apple’s WWDC coming up soon, we’re expecting to hear more about the company’s updated, ARM-based MacBook Pro laptops. Rumors point to Apple launching a slate of upgraded systems, this time based around its “M2” CPU, a scaled-up version of the M1 core that debuted last year. The M2 could reportedly field eight high-performance cores and two high-efficiency cores, up from a 4+4 configuration in the existing M1.

With the launch of the ARM-based M1 came a raft of x86-versus-ARM comparisons and online discussions comparing and contrasting the new architectures. In these threads, you’ll often see authors bring up two additional acronyms: CISC and RISC. The linkage between “ARM versus x86” and “CISC versus RISC” is so strong, every single story on the first page of Google results defines the first with reference to the second.

This association mistakenly suggests that “x86 versus ARM” can be classified neatly into “CISC versus RISC,” with x86 being CISC and ARM being RISC. Thirty years ago, this was true. It’s not true today. The battle over how to compare x86 CPUs to processors built by other companies isn’t a new one. It only feels new today because x86 hasn’t had a meaningful architectural rival for nearly two decades. ARM may prominently identify itself as a RISC CPU company, but today these terms conceal as much as they clarify regarding the modern state of x86 and ARM CPUs.

Image by David Bauer, CC BY-SA 2.0

A Simplified History of the Parts People Agree On

RISC is a term coined by David Patterson and David Ditzel in their 1981 seminal paper “The Case for a Reduced Instruction Set Computer.” The two men proposed a new approach to semiconductor design based on observed trends in the late 1970s and the scaling problems encountered by then-current CPUs. They offered the term “CISC” — Complex Instruction Set Computer — to describe many of the various CPU architectures already in existence that did not follow the tenets of RISC.

This perceived need for a new approach to CPU design came about as the bottlenecks limiting CPU performance changed. So-called CISC designs, including the original 8086, were designed to deal with the high cost of memory by moving complexity into hardware. They emphasized code density and some instructions performed multiple operations in sequence on a variable. As a design philosophy, CISC attempted to improve performance by minimizing the number of instructions a CPU had to execute in order to perform a given task. CISC instruction set architectures typically offered a wide range of specialized instructions.
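
As a back-of-the-envelope illustration of the density argument, consider how the same one-line task, incrementing a value held in memory, breaks down under the two philosophies. This is a toy sketch: the instruction names, encodings, and byte counts below are invented, not real x86 or RISC machine code.

```python
# Toy comparison for "counter = counter + 1", where counter lives in memory.
# Instruction names, encodings, and byte counts are invented for illustration.

cisc_program = [
    ("ADD_MEM_IMM", "counter", 1),   # one instruction: read memory, add, write back
]
cisc_bytes = 3                       # hypothetical dense, variable-length encoding

risc_program = [
    ("LOAD",  "r1", "counter"),      # load/store design: memory is touched only by
    ("ADDI",  "r1", "r1", 1),        # explicit loads and stores, and arithmetic
    ("STORE", "counter", "r1"),      # stays register-to-register
]
risc_bytes = 4 * len(risc_program)   # fixed 4-byte instructions

print(f"CISC-style: {len(cisc_program)} instruction,  ~{cisc_bytes} bytes of code")
print(f"RISC-style: {len(risc_program)} instructions, ~{risc_bytes} bytes of code")
```

When memory was the scarce resource, the denser encoding was the obvious win; the RISC bet was that cheaper memory and simpler, faster hardware would flip that calculus.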

By the late 1970s, CISC CPUs had a number of drawbacks. They often had to be implemented across multiple chips, because the VLSI (Very Large Scale Integration) techniques of the time period couldn’t pack all the necessary components into a single package. Implementing complicated instruction set architectures, with support for a large number of rarely used instructions, consumed die space and lowered maximum achievable clock speeds. Meanwhile, the cost of memory was steadily decreasing, making an emphasis on code size less important.

Patterson and Ditzel argued that CISC CPUs were still attempting to solve code bloat problems that had never quite materialized. They proposed a fundamentally different approach to processor design. Realizing that the vast majority of CISC instructions went unused (think of this as an application of the Pareto principle, or 80/20 rule), the authors proposed a much smaller set of fixed-length instructions, all of which would complete in a single clock cycle. While this would result in a RISC CPU performing less work per instruction than its CISC counterpart, chip designers would compensate for this by simplifying their processors.

This simplification would allow transistor budgets to be spent on other features like additional registers. Contemplated future features in 1981 included “on-chip caches, larger and faster transistors, or even pipelining.” The goal for RISC CPUs was to execute as close to one IPC (instruction per clock cycle, a measure of CPU efficiency) as possible, as quickly as possible. Reallocate resources in this fashion, the authors argued, and the end result would outperform any comparable CISC design.

It didn’t take long for these design principles to prove their worth. The R2000, introduced by MIPS in 1985, was capable of sustaining an IPC close to 1 in certain circumstances. Early RISC CPU families like SPARC and HP’s PA-RISC family also set performance records. During the late 1980s and early 1990s, it was common to hear people say that CISC-based architectures like x86 were the past, and perhaps good enough for home computing, but if you wanted to work with a real CPU, you bought a RISC chip. Data centers, workstations, and HPC is where RISC CPUs were most successful, as illustrated below:

This Intel image is useful but needs a bit of context. “Intel Architecture” appears to refer only to x86 CPUs — not chips like the 8080, which was popular in the early computer market. Similarly, Intel had a number of supercomputers in the “RISC” category in 2000 — it was x86 machines that gained market share, specifically.

Consider what this image says about the state of the CPU market in 1990. By 1990, x86 had confined non-x86 CPUs to just 20 percent of the personal computer market, but x86 itself had virtually no share in data centers and none in HPC. When Apple went looking for a next-generation CPU design in 1991, it bet on PowerPC because it believed high-performance CPUs built along RISC principles were the future of computing.

Agreement on the mutual history of CISC versus RISC stops in the early 1990s. The fact that Intel’s x86 architecture went on to dominate the computing industry across PCs, data centers, and high-performance computing (HPC) is undisputed. What’s disputed is whether Intel and AMD accomplished this by adopting certain principles of RISC design or if their claims to have done so were lies.

Divergent Views

One of the reasons why terms like RISC and CISC are poorly understood is because of a long-standing disagreement regarding the meaning and nature of certain CPU developments. A pair of quotes will illustrate the problem:

First, here’s Paul DeMone from RealWorldTech, in “RISC vs. CISC Still Matters:”

The campaign to obfuscate the clear distinction between RISC and CISC moved into high gear with the advent of the modern x86 processor implementations employing fixed length control words to operate out-of-order execution data paths… The “RISC and CISC are converging” viewpoint is a fundamentally flawed concept that goes back to the i486 launch in 1992 and is rooted in the widespread ignorance of the difference between instruction set architectures and details of physical processor implementation.

In contrast, here’s Jon “Hannibal” Stokes in “RISC vs. CISC: the Post-RISC Era:”

By now, it should be apparent that the acronyms “RISC” and “CISC” belie the fact that both design philosophies deal with much more than just the simplicity or complexity of an instruction set… In light of what we now know about the historical development of RISC and CISC, and the problems that each approach tried to solve, it should now be apparent that both terms are equally nonsensical… Whatever “RISC vs. CISC” debate that once went on has long been over, and what must now follow is a more nuanced and far more interesting discussion that takes each platform–hardware and software, ISA and implementation–on its own merits.

Neither of these articles is new. Stokes’ article was written in 1999, DeMone’s in 2000. I’ve quoted from them both to demonstrate that the question of whether the RISC versus CISC distinction is relevant to modern computing is literally more than 20 years old. Jon Stokes is a former co-worker of mine and more than expert enough to not fall into the “ignorance” trap DeMone references.

Implementation vs. ISA

The two quotes above capture two different views of what it means to talk about “CISC versus RISC.” DeMone’s view is broadly similar to ARM or Apple’s view today. Call this the ISA-centric position.

Stokes’ viewpoint is what has generally dominated thinking in the PC press for the past few decades. We’ll call this the implementation-centric position. I’m using the word “implementation” because it can contextually refer to both a CPU’s microarchitecture or the process node used to manufacture the physical chip. Both of these elements are relevant to our discussion. The two positions are described as “centric,” because there’s overlap between them. Both authors acknowledge and agree on many trends, even if they reach different conclusions.

According to the ISA-centric position, there are certain innate characteristics of RISC instruction sets that make these architectures more efficient than their x86 cousins, including the use of fixed-length instructions and a load/store design. While some of the original differences between CISC and RISC are no longer meaningful, the ISA-centric view believes the remaining differences are still determinative, as far as performance and power efficiency between x86 and ARM are concerned, provided an apples-to-apples comparison.

This ISA-centric perspective holds that Intel, AMD, and x86 won out over MIPS, SPARC, and POWER/PowerPC for three reasons: Intel’s superior process manufacturing, the gradual reduction in the so-called “CISC tax” over time that Intel’s superior manufacturing enabled, and that binary compatibility made x86 more valuable as its install base grew whether or not it was the best ISA.

The implementation-centric viewpoint looks to the ways modern CPUs have evolved since terms like RISC and CISC were invented and argues that we’re working with an utterly outdated pair of categories.

Here’s an example. Today, both x86 and high-end ARM CPUs use out-of-order execution to improve CPU performance. Using silicon to re-order instructions on the fly for better execution efficiency is entirely at odds with the original design philosophy of RISC. Patterson and Ditzel advocated for a less complicated CPU capable of running at higher clock speeds. Other common features of modern ARM CPUs, like SIMD execution units and branch prediction, also didn’t exist in 1981. The original goal of RISC was for all instructions to execute in a single cycle, and most ARM instructions conform to this rule, but the ARMv8 and ARMv9 ISAs contain instructions that take more than one clock cycle to execute. So do modern x86 CPUs.
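
To make the out-of-order idea concrete, here is a deliberately tiny Python sketch, a toy scheduler rather than a model of any real core: each cycle it issues whichever pending micro-ops have their inputs ready, so independent work slips past an instruction that is stuck waiting on a slow load.

```python
# Minimal out-of-order issue sketch. Each micro-op lists the registers it reads,
# the register it writes, and how many cycles its result takes to appear.
# This illustrates the concept only; it is not a model of any real CPU.

ops = [
    {"name": "load r1 <- [mem]", "reads": [],           "writes": "r1", "latency": 3},
    {"name": "add  r2 <- r1+r1", "reads": ["r1"],       "writes": "r2", "latency": 1},
    {"name": "mul  r4 <- r3*r3", "reads": ["r3"],       "writes": "r4", "latency": 1},
    {"name": "sub  r5 <- r4-r3", "reads": ["r4", "r3"], "writes": "r5", "latency": 1},
]

ready_at = {"r3": 0}   # r3 already holds a value; other registers appear as ops finish
pending = list(ops)
cycle = 0

while pending:
    issued = []
    for op in pending:
        # A micro-op may issue once every register it reads has been produced.
        if all(reg in ready_at and ready_at[reg] <= cycle for reg in op["reads"]):
            ready_at[op["writes"]] = cycle + op["latency"]
            print(f"cycle {cycle}: issue {op['name']}")
            issued.append(op)
    pending = [op for op in pending if op not in issued]
    cycle += 1

# Output: the mul and sub issue while the add is still waiting on the slow load,
# which is exactly the reordering an in-order pipeline cannot do.
```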

The implementation-centric view argues that a combination of process node improvements and microarchitectural enhancements allowed x86 to close the gap with RISC CPUs long ago and that ISA-level differences are irrelevant above very low power envelopes. This is the point of view backed by a 2014 study on ISA efficiency that I have written about in the past. It’s a point of view generally backed by Intel and AMD, and it’s one I’ve argued for.

But is it wrong?

Did RISC and CISC Development Converge?

The implementation-centric view is that CISC and RISC CPUs have evolved towards each other for decades, beginning with the adoption of new “RISC-like” decoding methods for x86 CPUs in the mid-1990s.

The common explanation goes like this: In the early 1990s, Intel and other x86 CPU manufacturers realized that improving CPU performance in the future would require more than larger caches or faster clocks. Multiple companies decided to invest in x86 CPU microarchitectures that would reorder their own instruction streams on the fly to improve performance. As part of that process, native x86 instructions were fed into an x86 decoder and translated to “RISC-like” micro-ops before being executed.
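
In cartoon form, that decode step looks something like the sketch below. The instruction names and micro-op format are invented for illustration, and real x86 decoders are vastly more involved; the point is simply that one memory-touching instruction fans out into a few simple, fixed-format internal operations, while a register-only instruction passes through nearly unchanged.

```python
# Toy front-end decoder: complex, memory-operand instructions are cracked into
# simple internal micro-ops; simple register ops map one-to-one.
# Instruction names and micro-op formats are invented for illustration.

def decode(instruction):
    op, *args = instruction
    if op == "add_mem_reg":                  # e.g. add [addr], reg (read-modify-write)
        addr, reg = args
        return [
            ("uop_load",  "tmp0", addr),     # bring the memory operand into a temporary
            ("uop_add",   "tmp0", reg),      # arithmetic happens register-to-register
            ("uop_store", addr,   "tmp0"),   # write the result back to memory
        ]
    if op == "add_reg_reg":                  # already simple: a single micro-op
        dst, src = args
        return [("uop_add", dst, src)]
    raise ValueError(f"unrecognized instruction: {op}")

for uop in decode(("add_mem_reg", "[rbx]", "rax")):
    print(uop)
```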

This has been the conventional wisdom for over two decades now, but it’s been challenged again recently. In a story posted to Medium back in 2020, Erik Engheim wrote: “There are no RISC internals in x86 chips. That is just a marketing ploy.” He points to both DeMone’s story and a quote by Bob Colwell, the chief architect behind the P6 microarchitecture.

The P6 microarchitecture was the first Intel microarchitecture to implement out-of-order execution and a native x86-to-micro-op decode engine. P6 was shipped as the Pentium Pro and it evolved into the Pentium II, Pentium 3, and beyond. It’s the grandfather of modern x86 CPUs. If anyone ought to know the answer to this question, it would be Colwell, so here’s what he had to say:

Intel’s x86’s do NOT have a RISC engine “under the hood.” They implement the x86 instruction set architecture via a decode/execution scheme relying on mapping the x86 instructions into machine operations, or sequences of machine operations for complex instructions, and those operations then find their way through the microarchitecture, obeying various rules about data dependencies and ultimately time-sequencing.

The “micro-ops” that perform this feat are over 100 bits wide, carry all sorts of odd information, cannot be directly generated by a compiler, are not necessarily single cycle. But most of all, they are a microarchitecture artifice — RISC/CISC is about the instruction set architecture… The micro-op idea was not “RISC-inspired”, “RISC-like”, or related to RISC at all. It was our design team finding a way to break the complexity of a very elaborate instruction set away from the microarchitecture opportunities and constraints present in a competitive microprocessor.

Case closed! Right?

Not exactly. (Imagine how it feels to even appear to contradict Bob Colwell.)

Intel wasn’t the first x86 CPU manufacturer to combine an x86 front-end decoder with what was claimed to be a “RISC-style” back-end. NexGen, later acquired by AMD, was. The NexGen Nx586 CPU debuted in March 1994, while the Pentium Pro wouldn’t launch until November 1995. Here’s how NexGen described its CPU: “The Nx586 processor is the first implementation of NexGen’s innovative and patented RISC86 microarchitecture.” (Emphasis added). Later, the company gives some additional detail: “The innovative RISC86 approach dynamically translates x86 instructions into RISC86 instructions. As shown in the figure below, the Nx586 takes advantage of RISC performance principles. Due to the RISC86 environment, each execution unit is smaller and more compact.”

It could still be argued that this is marketing speak and nothing more, so let’s step ahead to 1996 and the AMD K5. The K5 is typically described as an x86 front-end married to an execution backend AMD borrowed from its 32-bit RISC micro-controller, the Am29000. Before we check out its block diagram, I want to compare it against the original Intel Pentium. The Pentium is arguably the pinnacle of CISC x86 evolution, given that it implements both pipelining and superscalar execution in an x86 CPU, but does not translate x86 instructions into micro-ops and lacks an out-of-order execution engine.


Now, compare the Pentium against the AMD K5.

If you’ve spent any time looking at microprocessor block diagrams, the K5 should look familiar in a way that the Pentium doesn’t. AMD bought NexGen after the launch of the Nx586. The K5 was a homegrown AMD design, but K6 was originally a NexGen product. From this point forward, CPUs start looking more like the chips we’re familiar with today. And according to the engineers that designed these chips, the similarities ran more than skin deep.

David Christie of AMD published an article in IEEE Micro on the K5 back in 1996 that speaks to how it hybridized RISC and CISC:

We developed a micro-ISA based loosely on the 29000’s instruction set. Several additional control fields expanded the microinstruction size to 59 bits. Some of these simplify and speed up the superscalar control logic. Others provide x86-specific functionality that is too performance critical to synthesize with sequences of microinstructions. But these microinstructions still adhere to basic RISC principles: simple register-to-register operations with fixed-position encoding of register specifiers and other fields, and no more than one memory reference per operation. For this reason we call them RISC operations, or ROPs for short (pronounced R-ops). Their simple, general-purpose nature gives us a great deal of flexibility in implementing the more complex x86 operations, helping to keep the execution logic relatively simple.

The most important aspect of the RISC microarchitecture, however, is that the complexity of the x86 instruction set stops at the decoder and is largely transparent to the out-of-order execution core. This approach requires very little extra control complexity beyond that needed for speculative out-of-order RISC execution to achieve speculative out-of-order x86 execution. The ROP sequence for a task switch looks no more complicated than that for a string of simple instructions. The complexity of the execution core is effectively isolated from the complexity of the architecture, rather than compounded by it.

Christie is not confusing the difference between an ISA and the details of a CPU’s physical implementation. He’s arguing that the physical implementation is itself “RISC-like” in significant and important ways.

The K5 re-used parts of the execution back-end AMD developed for its Am29000 family of RISC CPUs, and it implements an internal instruction set that is more RISC-like than the native x86 ISA. The RISC-style techniques NexGen and AMD refer to during this period reference concepts like data caches, pipelining, and superscalar architectures. Two of these — caches and pipelining — are named in Patterson’s paper. None of these ideas are strictly RISC, but they all debuted in RISC CPUs first, and they were advantages associated with RISC CPUs when K5 was new. Marketing these capabilities as “RISC-like” made sense for the same reason it made sense for OEMs of the era to describe their PCs as “IBM-compatible.”

The degree to which these features are RISC and the answer to whether x86 CPUs decode RISC-style instructions depends on the criteria you choose to frame the question. The argument is larger than the Pentium Pro, even if P6 is the microarchitecture most associated with the evolution of techniques like an out-of-order execution engine. Different engineers at different companies had their own viewpoints.

How Encumbered Are x86 CPUs in the Modern Era?

The past is never dead. It’s not even past. — William Faulkner

It’s time to pull this discussion into the modern era and consider what the implications of this “RISC versus CISC” comparison are for the ARM and x86 CPUs actually shipping today. The question we’re really asking when we compare AMD and Intel CPUs against Apple’s M1 and future M2 is this: are there historical x86 bottlenecks that will prevent x86 from competing effectively with Apple and with future ARM chips from companies such as Qualcomm?

According to AMD and Intel: No. According to ARM: Yes. Since all of the companies in question have obvious conflicts of interest, I asked Agner Fog instead.

Agner Fog is a Danish evolutionary anthropologist and computer scientist, known for the extensive resources he maintains on the x86 architecture. His microarchitectural manuals are practically required reading if you want to understand the low-level behavior of various Intel and AMD CPUs:

ISA is not irrelevant. The x86 ISA is very complicated due to a long history of small incremental changes and patches to add more features to an ISA that really had no room for such new features…

The complicated x86 ISA makes decoding a bottleneck. An x86 instruction can have any length from 1 to 15 bytes, and it is quite complicated to calculate the length. And you need to know the length of one instruction before you can begin to decode the next one. This is certainly a problem if you want to decode 4 or 6 instructions per clock cycle! Both Intel and AMD now keep adding bigger micro-op caches to overcome this bottleneck. ARM has fixed-size instructions so this bottleneck doesn’t exist and there is no need for a micro-op cache.

Another problem with x86 is that it needs a long pipeline to deal with the complexity. The branch misprediction penalty is equal to the length of the pipeline. So they are adding ever-more complicated branch prediction mechanisms with large branch history tables and branch target buffers. All this, of course, requires more silicon space and more power consumption.

The x86 ISA is quite successful despite these burdens. This is because it can do more work per instruction. For example, a RISC ISA with 32-bit instructions cannot load a memory operand in one instruction if it needs 32 bits just for the memory address.
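
A toy sketch of the decode-serialization problem Fog describes, with an invented byte stream and length table standing in for real x86 encodings (which also involve prefixes, ModRM/SIB bytes, and more):

```python
# Toy illustration of the decode-serialization problem. The byte stream and
# length rules are invented; real x86 length decoding must account for prefixes,
# ModRM/SIB bytes, and more.

code = bytes([0x01, 0xAA,            # 2-byte instruction
              0x03,                  # 1-byte instruction
              0x02, 0xBB, 0xCC,      # 3-byte instruction
              0x01, 0xDD])           # 2-byte instruction

def variable_length_boundaries(code):
    """Each length depends on the opcode, so boundaries are found one at a time."""
    length_of = {0x01: 2, 0x02: 3, 0x03: 1}   # made-up opcode -> length table
    offsets, pc = [], 0
    while pc < len(code):
        offsets.append(pc)
        pc += length_of[code[pc]]   # must finish this instruction to find the next
    return offsets

def fixed_length_boundaries(code, width=4):
    """Fixed-width ISAs know every boundary up front: trivially parallel to decode."""
    return list(range(0, len(code), width))

print("variable-length boundaries:", variable_length_boundaries(code))  # [0, 2, 3, 6]
print("fixed-length boundaries:   ", fixed_length_boundaries(code))     # [0, 4]
```

The serial walk in the first function is the bottleneck; the fixed-width case can hand every four-byte slot to a separate decoder in the same cycle.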

In his microarchitectural manual, Agner also writes that more recent trends in AMD and Intel CPU designs have hearkened back to CISC principles to make better use of limited code caches, increase pipeline bandwidth, and reduce power consumption by keeping fewer micro-ops in the pipeline. These improvements represent microarchitectural offsets that have improved overall x86 performance and power efficiency.

And here, at last, we arrive at the heart of the question: Just how heavy a penalty do modern AMD and Intel CPUs pay for x86 compatibility?

The decode bottleneck, branch prediction, and pipeline complexities that Agner refers to above are part of the “CISC tax” that ARM argues x86 incurs. In the past, Intel and AMD have told us decode power is a single-digit percentage of total chip power consumption. But that doesn’t mean much if a CPU is burning power for a micro-op cache or complex branch predictor to compensate for the lack of decode bandwidth. Micro-op cache power consumption and branch prediction power consumption are both determined by the CPU’s microarchitecture and its manufacturing process node. “RISC versus CISC” does not adequately capture the complexity of the relationship between these three variables.

It’s going to take a few years before we know if Apple’s M1 and future CPUs from Qualcomm represent a sea change in the market or the next challenge AMD and Intel will rise to. Whether maintaining x86 compatibility is a burden for modern CPUs is both a new question and a very old one. New, because until the M1 launched, there was no meaningful comparison to be made. Old, because this topic used to get quite a bit of discussion back when there were non-x86 CPUs still being used in personal computers.

AMD continues to improve Zen by 1.15x – 1.2x per year. We know Intel’s Alder Lake will also use low-power x86 CPU cores to improve idle power consumption. Both x86 manufacturers continue to evolve their approaches to performance. It will take time to see how these cores, and their successors, map against future Apple products — but x86 is not out of this fight.

Why RISC vs. CISC Is the Wrong Way to Compare x86, ARM CPUs

When Patterson and Ditzel coined RISC and CISC they intended to clarify two different strategies for CPU design. Forty years on, the terms obscure as much as they clarify. RISC and CISC are not meaningless, but the meaning and applicability of both terms have become highly contextual.

Boiling the entire history of CPU development down to CISC versus RISC is like claiming these two books contain the sum of all human knowledge. Only VLIW kids will get this post.

The problem with using RISC versus CISC as a lens for comparing modern x86 versus ARM CPUs is that it takes three specific attributes that matter to the x86 versus ARM comparison — process node, microarchitecture, and ISA —  crushes them down to one, and then declares ARM superior on the basis of ISA alone. “ISA-centric” versus “implementation-centric” is a better way of understanding the topic, provided one remembers that there’s a Venn diagram of agreed-upon important factors between the two. Specifically:

The ISA-centric argument acknowledges that manufacturing geometry and microarchitecture are important and were historically responsible for x86’s dominance of the PC, server, and HPC market. This view holds that when the advantages of manufacturing prowess and install base are controlled for or nullified, RISC — and by extension, ARM CPUs — will typically prove superior to x86 CPUs.

The implementation-centric argument acknowledges that ISA can and does matter, but that historically, microarchitecture and process geometry have mattered more. Intel is still recovering from some of the worst delays in the company’s history. AMD is still working to improve Ryzen, especially in mobile. Historically, both x86 manufacturers have demonstrated an ability to compete effectively against RISC CPU manufacturers.

Given the reality of CPU design cycles, it’s going to be a few years before we really have an answer as to which argument is superior. One difference between the semiconductor market of today and the market of 20 years ago is that TSMC is a much stronger foundry competitor than most of the RISC manufacturers Intel faced in the late 1990s and early 2000s. Intel’s 7nm team has got to be under tremendous pressure to deliver on that node.

Nothing in this story should be read to imply that an ARM CPU can’t be faster and more efficient than an x86 CPU. The M1 and the CPUs that will follow from Apple and Qualcomm represent the most potent competitive threat x86 has faced in the past 20 years. The ISA-centric viewpoint could prove true. But RISC versus CISC is a starting point for understanding the historical difference between two different types of CPU families, not the final word on how they compare today.

This argument is clearly going nowhere. Fights that kicked off when Cheers was the hottest thing on television tend to have a lot of staying power. But understanding its history hopefully helps explain why it’s a flawed lens for comparing CPUs in the modern era.

Note: I disagree with Engheim on the idea that the various RISC-like claims made by x86 manufacturers constitute a marketing ploy, but he’s written some excellent stories on various aspects of programming and CPU design. I recommend his work for more details on these topics.

Feature image by Intel.

from ExtremeTech https://ift.tt/3fChG2Q


(Image: Ithome)
We love SSDs for so many reasons: they’re silent, they move data so much faster than hard drives it’s orgasmic at times, they’re tiny and easy to tuck out of the way, they’re affordable in reasonable capacities, and they don’t get very hot, or at least current models don’t. Though all these traits will remain true for the foreseeable future, that last one might become a “legacy feature” soon with the arrival of power-hungry PCIe 5.0 SSDs. As CES 2022 draws near, manufacturers have begun announcing radical SSD cooling products, which were seen as something of a novelty in the past but may become more prominent as drive performance rapidly escalates.

As a refresher, PCIe 5.0 is due to hit the scene in Q1 of 2022, and it’s more than just a modest bump in spec compared to PCIe 4.0. As in the past, maximum theoretical performance is expected to double. So instead of 7GB/s read and write speeds, the next-gen standard goes all the way to 14GB/s for an x4 connection, making it a huge performance leap and an instant upgrade choice for a lot of hardcore PC users.
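
For those curious where those headline figures come from, here is the rough math, using the published per-lane signaling rates (16 GT/s for PCIe 4.0 and 32 GT/s for PCIe 5.0, both with 128b/130b line coding). Shipping drives land a little below these theoretical ceilings:

```python
# Back-of-the-envelope PCIe throughput for an x4 NVMe link. Signaling rates and
# 128b/130b line coding come from the PCIe 4.0/5.0 specs; shipping drives quote
# real-world numbers a bit below these theoretical ceilings.

def x4_throughput_gb_per_s(gigatransfers_per_s):
    lanes = 4
    encoding_efficiency = 128 / 130            # 128b/130b line-code overhead
    bits_per_s = gigatransfers_per_s * 1e9 * lanes * encoding_efficiency
    return bits_per_s / 8 / 1e9                # convert to gigabytes per second

print(f"PCIe 4.0 x4: {x4_throughput_gb_per_s(16):.1f} GB/s theoretical")  # ~7.9 GB/s
print(f"PCIe 5.0 x4: {x4_throughput_gb_per_s(32):.1f} GB/s theoretical")  # ~15.8 GB/s
```

That works out to roughly 8GB/s and 16GB/s of raw link bandwidth per direction, which is why ~7GB/s and ~14GB/s are the figures drive makers actually quote.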

The only problem is that upgrade might not be as easy as you think it’s going to be. Companies have already begun announcing active cooling solutions for these barn-burner drives (that is, coolers with spinning fans attached to them). This is a bit of an upgrade to the heatsinks a lot of companies have been using on their drives previously, which are typically passive and have no fans. The heatsink absorbs the heat from the drive and radiates it away via airflow inside the chassis in an effort to prevent the drives from throttling under heavy workloads.

When throttling occurs, performance is reduced, just like it is with CPUs, GPUs, and other chips. The issue with SSDs in particular is that they’re placed in a warm environment to begin with, such as next to a GPU in a desktop or sandwiched inside a laptop, so there’s usually not a lot of cool air swirling around. They can also generate a lot of heat in a very small area inside the drive, which is where things like heat spreaders come into play.

This brings us to the newest M.2 PCIe SSD cooler from Qiao Sibo, which is an actual blower fan for your drive. This type of cooler sucks air into a chamber and then exhausts it in one direction, which is usually towards the rear of the chassis or outside the chassis in the case of GPU coolers with a similar design. According to Tom’s Hardware, the cooler mounts onto the SSD and sucks the heat into its enclosure, then exhausts it using a fan that spins at 3,000rpm at 27 dBA. The fan can move 4.81 CFM of air, which is modest but still overkill for an SSD, typically. This is because a normal workload for a home user leaves an SSD idle most of the time, with bursts of activity when the person decides to access the drive.


How long will it be before we need a custom loop for our SSDs? (Image: IThome)

Since PCIe 5.0 will theoretically offer double the performance of today’s drives, it will certainly require more power and thus generate more heat. So, does that mean this new generation of drives will be transformed from the lukewarm devices they are today into power-hungry, active-cooling-needing hellbeasts? The answer is kind of murky at this time, but since some of today’s fastest drives can hit 80C under a heavy workload, a logical conclusion is that the next generation might require more robust cooling than our current options, which are typically just heatsinks. If these new drives do start to require something more extravagant, it could limit the number of SSDs people are able to attach to their mainboard, as one M.2 drive usually goes above the GPU while the rest must fit in between the PCIe slots, which could prove problematic for people with thicc GPUs or other add-in cards.

Some PCIe 4.0 SSDs already include rather beefy cooling solutions. (Image: Corsair)

Still, there is some evidence that PCIe 5.0 might not require that much more power than PCIe 4.0. For example, Samsung says its first PCIe 5.0 drive is 30 percent more efficient than the previous generation, but it doesn’t quote any numbers. A company named Fadu has also released info on its first PCIe 5.0 SSD, which has an average power rating of 5.2W, according to Tom’s Hardware. That is hardly a scenario that requires active cooling — assuming peak power draw isn’t dramatically higher.

In the end, we’ll just have to wait and see how these drives perform to draw any conclusions. Maybe SSDs will follow the same trajectory as GPUs, which started out with naked chips on a PCB, then evolved into the hulking, actively-cooled monstrosities we use today. We sure hope that’s not the case, but then again if SSDs can improve in performance over time the way GPUs have, having a separate cooler for them would actually be a fair tradeoff.

from ExtremeTech https://ift.tt/32A0JST


Virtual assistants like Alexa, Siri, and Google Assistant are supposed to help you get things done and make life at least a little easier. However, Alexa almost made life a little shorter for one family. According to Twitter user Kristin Livdahl, Alexa told her 10-year-old daughter to stick a penny in a power outlet as a “challenge.” Luckily, the parents were around to make sure that didn’t happen. 

According to Livdahl, her daughter asked the family’s Echo speaker for a challenge, and Alexa came up with something called the “Outlet Challenge.” It’s something that spread on TikTok for a bit earlier this year, encouraging the easily swayed to insert a phone charger halfway into an outlet and then touch a penny to the exposed prongs. The challenge, apparently, is to leave the penny there while sparks rain down from the plug. Alexa was even kind enough to prepare a 20-second timer for the purpose. 

Shorting a plug like this is not likely to kill you with electricity (at least in the US), but it can cause damage to the outlet and spark a fire. Police in Massachusetts issued a warning last year when two students started a fire in school while performing the Outlet Challenge. That raises the question, why would Alexa suggest this? Because virtual assistants are only as good as the data they can access, and they’re also still foolish. 

The Outlet Challenge is featured on a site called OurCommunityNow, and Alexa just grabs top search results for some queries. It’s not alone in this, but it’s still a pretty big screw-up. Any human being would know it’s a bad idea to tell an impressionable child to do something so dangerous, but Alexa and the other voice assistants haven’t cracked the code on common sense. 

Amazon reacted swiftly, with a support rep reaching out on Twitter the following day. The company says it worked to fix this error as soon as it became aware of it. Of course, someone at the company had to know that Alexa would pull random challenges from the internet. And as we all know, the internet is a terrible place. If you ask a similar query of the Google Assistant, it comes up with various custom responses and games. 

Despite its shortcomings, Amazon is still cranking out new Alexa-powered devices. It even plans to release a robot with Alexa. That might be a little alarming as a recent analysis shows that Alexa collects more data than other smart assistants. It might also trick you into burning down your house. Remember, be skeptical of any instructions provided by a machine. They don’t have your best interests at heart.

from ExtremeTech https://ift.tt/3pAIwgE


(Photo by Alexander Pohl/NurPhoto via Getty Images)

Apple was hit with a wave of criticism earlier this year when it announced plans to scan iPhones to stop the distribution of Child Sexual Abuse Material (CSAM). Critics fretted that Apple’s hash-checking system could be co-opted by governments to spy on law-abiding iPhone users. In response to the backlash, Apple might end up making changes to that program, but Google has its own way of spotting CSAM, and it might be even more intrusive for those who use all of Google’s cloud services. 

The specifics on Google’s CSAM scanning come by way of a warrant issued in early 2020 and spotted by Forbes. According to the filing, Google detected CSAM in Google Drive, its cloud storage platform. And here’s where things get a little weird; the warrant stemming from this report targeted digital artwork, not a photo or video depicting child abuse. 

Apple’s system under its “Expanded Protections for Children” banner uses hashes for known child abuse materials, scanning iDevices for matching hashes. This should prevent false positives and it doesn’t require Apple to look at any of the files on your phone. The issue cited most often with this approach is that Apple is still scanning your personal files on your smartphone, and it could be a privacy nightmare if someone manages to substitute different hashes. Apple says this isn’t possible, though. 

Google, as it turns out, does something similar. It uses a technique initially developed for YouTube to look for hashes of known CSAM, but it also relies on a machine learning model trained to identify new images of child abuse. It’s not clear how Google spotted the problematic files in 2020, but the unidentified individual is described as an artist. That suggests he is the one who created the drawings at issue, and that Google’s systems identified them as CSAM. 

Lots of servers at Google's Douglas County data center. Blue LEDs mean the servers are healthy, apparently

After Google’s system spotted the drawings, it sent the data to the National Center for Missing and Exploited Children, and from there the case went to the DHS Homeland Security Investigations unit. Investigators filed the warrant to get access to the user’s data. The artist has not been publicly identified, as no charges were ever brought. However, US law holds that drawings depicting child abuse can still be illegal if they lack “serious literary, artistic, political, or scientific value.” That’s hard to prove; even agreeing on a definition of “art” can be a challenge, which may explain why no charges were brought in this case. 

While Google’s use of AI is more aggressive than Apple’s, it’s also seemingly restricted to cloud services like Gmail and Drive. So, Google isn’t set up for scanning Android phones for hashes like Apple is on the iPhone, but Google’s approach can sweep up original artwork that may or may not be illegal, depending on who you ask. Regardless of what is “art,” Google isn’t doing this just to do it — there is an undeniable problem with CSAM on all cloud services. Google says that it reported 3.4 million pieces of potentially illegal material in 2021, and that’s up from 2.9 million the year before.

from ExtremeTech https://ift.tt/3z6nt8S

Tuesday, December 28, 2021

NEWS TECHNOLOGIE

(Photo: Habitat for Humanity Peninsula and Greater Williamsburg)
Habitat for Humanity, a non-profit that builds and repairs homes in partnership with lower-income families and individuals, has officially signed over its first 3D printed home. 

Habitat for Humanity partnered with Alquist to build the 1,200-square-foot house in Williamsburg, VA. Alquist, a large-scale 3D printing company, aims to make home ownership more accessible across demographics using advanced, environmentally friendly building techniques. Not only does the company’s strategy reduce build time, but its 3D printed concrete homes are said to boast longer life expectancies than traditional wood-framed structures. Concrete walls also stand up well against tornadoes and hurricanes and help reduce homeowners’ energy bills, as they offer better insulation than wood and drywall.

The single-family residence became April Stringfield’s home just days before Christmas. Stringfield, who works at a nearby hotel and has a 13-year-old son, purchased the home through Habitat for Humanity’s Habitat Homebuyer Program, which allows people with lower yet steady incomes to purchase homes with zero interest. 

Walking by, you’d never guess the home is 3D printed. Its walls are made of layered concrete, giving the exterior a textured look almost like stucco. (Alquist’s site says each home’s interior and exterior finish is up to the homebuyer, as the company is able to produce a smooth, stucco-like, or “popcorn” finish.) The crew was able to print the home in 12 hours, reducing construction time by several weeks. Some of the home’s decorative features, like its front porch, appear to have been built using traditional methods—but the home comes with a personal 3D printer that allows Stringfield to print items like trim and cabinet knobs, should she need them in the future.

Prior to Stringfield’s move-in, the home was fitted with a Raspberry Pi-based monitoring system designed to maximize energy efficiency and comfort. Andrew McCoy, Director of the Virginia Center for Housing Research and Associate Director of the Myers-Lawson School of Construction at Virginia Tech, worked with Alquist on this long-term project. The monitoring system will track and maintain indoor environment data, enabling a handful of smart building applications designed to lower Stringfield’s energy bills. Compatible solar panels will also be installed on the home once Stringfield and her son are settled in.
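The article doesn’t detail how the monitoring system is built, but a periodic sensor-logging loop on a Raspberry Pi might look roughly like the sketch below; read_sensor is a placeholder for whatever temperature and humidity sensor the installation actually uses, and the values here are simulated.

import csv
import random
import time
from datetime import datetime

def read_sensor() -> dict:
    # Simulated values; a real deployment would query an attached sensor.
    return {"temp_c": round(random.uniform(19, 24), 1),
            "humidity_pct": round(random.uniform(35, 55), 1)}

def log_readings(path: str = "indoor_log.csv", interval_s: int = 300) -> None:
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            reading = read_sensor()
            writer.writerow([datetime.now().isoformat(),
                             reading["temp_c"], reading["humidity_pct"]])
            f.flush()  # keep the log durable between samples
            time.sleep(interval_s)

Data like this is what smart-building applications consume to decide when to heat, cool, or ventilate, which is where the promised energy savings would come from.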

“My son and I are so thankful,” Stringfield said upon receiving her new home. “I always wanted to be a homeowner. It’s like a dream come true.”

from ExtremeTech https://ift.tt/3EzL9DM

NEWS TECHNOLOGIE

If you’re the type of web content consumer who likes your monitor rotated vertically so you don’t have to scroll as much, the new LG DualUP (28MQ780) should be right up your alley. Instead of the typical widescreen, rectangular shape we’ve been using for about 15 years, LG has chosen what is essentially a square shape for the DualUP, making it the perfect size for people who want to see an entire webpage at once, or presumably for people who write code for a living. It’s essentially like taking two small monitors and stacking them vertically on top of one another.

Historically, most monitors used a 4:3 aspect ratio, like television. Roughly 15 years ago, widescreen displays offering 16:10 and 16:9 aspect ratios became more popular. 16:9 has dominated the industry for years, but we’ve seen a few other ratios pop up again recently, including 21:9, some 16:10 panels, and devices like Microsoft’s Surface family, which typically offer panels in a 3:2 ratio. This new panel uses a 16:18 ratio.

The monitor boasts a resolution of 2560×2880, which LG has christened “Square Double QHD.” The 27.6″ IPS panel has rather modest specs otherwise: a very average 1,000:1 contrast ratio, just 300 nits of brightness, and a pedestrian 5ms grey-to-grey response time, though thanks to its Nano IPS technology it covers 98 percent of the DCI-P3 color gamut. Still, LG says its monitor “offers the same screen real estate as two 21.5-inch displays and has a vertical split view function that lets users see more in one glance.” Clearly, this model is all about the aspect ratio and multitasking rather than next-gen specifications or glorious HDR capabilities, though it does support HDR10. Also, gaming on this would be… interesting, but is probably not recommended.
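To see where the 16:18 figure and the two-21.5-inch-displays comparison come from, here is a quick back-of-the-envelope check (our arithmetic from the published resolution and diagonal, not anything in LG’s spec sheet):

from math import gcd, hypot

w, h, diag_in = 2560, 2880, 27.6

g = gcd(w, h)
print(f"aspect ratio: {w // g}:{h // g}")      # 8:9, i.e. 16:18

ppi = hypot(w, h) / diag_in
print(f"pixel density: {ppi:.0f} PPI")         # ~140 PPI

# Each half of the panel is a 2560x1440 region; its diagonal at this
# density works out to roughly 21 inches, in line with LG's comparison
# to two 21.5-inch displays.
print(f"half-panel diagonal: {hypot(2560, 1440) / ppi:.1f} in")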

LG’s DualUP display is equivalent to two 21.5″ panels stacked vertically.

Another interesting feature of the DualUP is that it includes a mounting arm similar to a VESA stand. According to LG, the monitor arm “elevates user comfort with the ultra-adjustable LG Ergo stand which saves space as it clamps securely to most desks and tables.” LG also notes that since you don’t have two monitors side by side like a traditional multi-monitor setup, the DualUP can reduce neck strain, which sounds plausible. Other specs include USB-C with 96W of power delivery and data, so one cable could drive the entire thing. There are also two HDMI ports, a DisplayPort connection, a USB hub, and onboard 7W speakers. It’s not clear from the press release which generation of HDMI or USB is on offer, however.

So far LG hasn’t released pricing for the DualUP, but it will probably do so at the upcoming CES trade show in January. We would probably still prefer our ultrawide gaming monitor personally, just because it works for us and lets us enjoy some awesome gaming after (or during, ahem) our work day, but if we needed a monitor just for work, the DualUP seems like an effective solution. One interesting side note: we wrote almost a year ago about how “alternative” aspect ratios to 16:9 were coming down the pike, but we were only talking about 3:2 and 16:10 back then, so this 16:18 panel is definitely a surprise, and maybe a sign of things to come.

from ExtremeTech https://ift.tt/3z6ug2f