• seven_phone@lemmy.world · 20 days ago

    To be fair, I think it is happening to Google as much as to everyone else; we are running down a hill and going too fast to stop.

  • Sir Arthur V Quackington@lemmy.world · 20 days ago

    Tech companies don’t really give a damn what customers want anymore. They have decided this is the path of the future because it gives them the most control over your data, your purchasing habits and your online behavior. Since they control the back end, the software, the tech stack, the hardware, all of it, they just decided this is how it shall be. And frankly, there’s nothing you can do to resist it, aside from eschewing phones entirely and divorcing yourself from all modern technology, which isn’t really reasonable for most people. That or legislation, but LOL United States.

    • jjjalljs@ttrpg.network · 20 days ago

      Tech companies don’t really give a damn what customers want anymore.

      Ed Zitron wrote an article about how leadership is made up of business idiots: they don’t know the products or the users, but they make the decisions and get paid. It’s long, like everything he writes, but interesting.

      https://www.wheresyoured.at/the-era-of-the-business-idiot/

      Our economy is run by people that don’t participate in it and our tech companies are directed by people that don’t experience the problems they allege to solve for their customers, as the modern executive is no longer a person with demands or responsibilities beyond their allegiance to shareholder value.

    • FreedomAdvocate@lemmy.net.au · 20 days ago

      Tech companies don’t really give a damn what customers want anymore.

      Most of the time customers don’t know what they want until you give it to them, though. People don’t know they want something when they don’t know it exists. A perfect example using AI: DLSS. Probably no one would have wanted their games to be rendered at a significantly lower resolution and then have AI recreate 3/4 of the pixels to get it back up to their regular resolution - yet when it came out it was one of the biggest game changers in gaming history, and it is now basically universally agreed upon as the default way to do game development going forward.
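
      As a rough back-of-envelope sketch of what that looks like in numbers (using the commonly cited per-axis render scales for the DLSS presets, so treat the exact factors as approximate):

      ```python
      # Commonly cited per-axis render scales for DLSS presets (approximate).
      presets = {
          "Quality":     0.67,
          "Balanced":    0.58,
          "Performance": 0.50,  # half resolution per axis -> only 1/4 of the pixels rendered
      }

      target_w, target_h = 3840, 2160  # 4K output

      for name, scale in presets.items():
          rendered = int(target_w * scale) * int(target_h * scale)
          total = target_w * target_h
          print(f"{name:11s}: {rendered / total:.0%} of pixels rendered, "
                f"{1 - rendered / total:.0%} reconstructed")
      ```

      At the Performance preset that works out to roughly a quarter of the pixels actually rendered, which is where the "recreate 3/4 of the pixels" figure comes from.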

      And frankly, there’s nothing you can do to resist it

      Vote with your wallet. Make your opinion known. If you’re just a vocal minority then no, it likely won’t make a difference - but if enough people do it, it will. More people need to understand that while they have an opinion, it might not be the majority’s opinion, and it might be “wrong”.

      • LainTrain@lemmy.dbzer0.com · 20 days ago

        And it’s fucking awful.

        People didn’t “want it” either before or after it was forced into being a thing; people had no choice because of GPU prices, especially console peasants stuck with AMD APUs on par with something like a GTX 1070, where a middleman built their PC for them for under £600 plus hundreds in PS Plus/game fees over the years to come.

        DLSS is even worse cancer than TAA; the washed-out blurry slop only looks good in YouTube videos because of the compression. It’s one thing if you’re playing at the extremes of low performance and need a crutch, e.g. a Steam Deck; it’s a whole other thing when you make your game look like dog shit and then use fancy FXAA and motion blur to cover it up so you can’t see it.

        I agree with you on making the personal choice to steer away from megacorps, and I practice this myself as much as I can, but it hasn’t ever worked en masse and I don’t expect it will. Nor do I expect people will have much choice, as every smaller company will do what every big company does, and AI will be integrated in so many small ways, like all the ways it already was pre-Covid and pre-AI spring, that people will use it unknowingly and love it.

        • FreedomAdvocate@lemmy.net.au · 20 days ago

          And it’s fucking awful.

          DLSS? No way lol. DLSS often gives better image quality than native resolution, and gives you a choice of image quality vs performance trade-offs. It’s a godsend.

          DLSS is even worse cancer than TAA

          You’ve clearly never used DLSS, at least not DLSS3 or 4. I’ve got a 4070 Super and Ryzen 7 and I use DLSS by choice literally every time it’s available.

          • LainTrain@lemmy.dbzer0.com · 18 days ago

            Lolwut? No it doesn’t. Yeah, it turns off TAA so it might look sharper at first, and if you turn off the ugly-ass sharpening then it’s playable, but literally any other option looks better than TAA, including TXAA from the early 2010s lol.

            Do you maybe mean DLAA? I have an RTX 3090 and a 9800X3D. It’s OK. When the option exists I just crank up the res or turn on MSAA instead. Much better.

            If you mean DLSS, my condolences. I’d rather play with FXAA most of the time.

            The only game I’ll use DLSS in (on the Transformer model + Quality) is CP2077 with Path Tracing. With Ray Reconstruction it’s almost worth the blurriness, especially because that game forces TAA unless you use DLAA/DLSS, and I don’t get a playable framerate without it, but I also don’t want to play without Path Tracing. Maybe one day I’ll have the hardware needed to run it with PT and DLAA.

            • FreedomAdvocate@lemmy.net.au · 20 days ago

              What are you talking about “temporal+quality” for DLSS? That’s not a thing.

              I’m talking about DLSS. There are many comparisons out there showing how amazing it is, often resulting in better IQ than native.

              FXAA is not an AI upscaler, what are you talking about?

              • LainTrain@lemmy.dbzer0.com · 19 days ago

                What are you talking about “temporal+quality” for DLSS? That’s not a thing.

                Sorry, I was mistaken; it’s not “temporal”, I meant “transformer”, as in the “transformer model”, as used in CP2077.

                I’m talking about DLSS. There are many comparisons out there showing how amazing it is, often resulting in better IQ than native.

                Let me explain:

                No, AI upscaling from a lower resolution will never be better than just running the game at the native resolution it’s being upscaled to.

                By its very nature, the ML model is just “guessing” what the frame might look like if it were rendered at native resolution. It’s not an accurate representation of the render output or artistic intent. Is it impressive? Yes, of course; it’s a miracle of technology and a result of brilliant engineering and research in the ML field, applied creatively and practically in real-time computer graphics, but it does not result in a better image than native, nor does it aim to do so.

                It’s mainly there to increase performance when rendering at native resolution is too computationally expensive and results in poor performance, while minimizing the loss in detail. It may do a good job of that, relatively speaking, but it can never match an actual native image, and compressed YouTube videos with bitrates lower than a DVD’s aren’t a good reference point, because they show a compressed motion JPEG of the render, not anything even close to what the real output looks like.

                Even if it seems like there’s “added detail”, any “added detail” is either an illusion stemming from the sharpening post-processing filter, akin to the “added detail” of a cheap Walmart “HD Ready” TV circa 2007 with the sharpening cranked up, or outright fictional, and does not exist within the game files themselves. If by “better” we mean the most high-fidelity representation of the game as it exists on disk, then AI cannot ever be better.

                FXAA is not an AI upscaler, what are you talking about?

                I mention FXAA because really the only reason we use “AI upscalers” is because anti-aliasing is really really computationally expensive.

                The single most immediately obvious consequence of a low render resolution is aliasing, first and foremost. Almost all other aspects of a game’s graphics, e.g. texture resolution, are usually completely detached from this.

                The reason aliasing happens in the first place is that our ability to create, ship, process and render increasingly high-polygon-count games has massively surpassed our ability to push pixels on screen in real time.

                Of course legibility suffers at lower resolutions as well, but not nearly as much as the smoothness of edges on high-polygon objects.

                So for assets that would look really good at, say, 4K, we run them at 720p instead, and this creates jagged edges because we literally cannot fit the detail into the pixels we’re pushing.

                The best and most direct solution will always be just to render the game at a much higher resolution. But that kills framerates.

                We can’t do that, so we resort to anti-aliasing techniques instead, the simplest of which is MSAA, which just multi-samples (renders at a higher resolution) those edges and downscales them.
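
                To make the multi-sample-and-downscale idea concrete, here's a toy sketch (brute-force supersampling of the whole frame; real MSAA only multi-samples along geometry edges, but the principle is the same):

                ```python
                import numpy as np

                def render_edge(w, h):
                    """Toy 'renderer': white above a diagonal edge, black below."""
                    y, x = np.mgrid[0:h, 0:w]
                    return (x * h > y * w).astype(float)

                target_w, target_h, factor = 64, 64, 4

                # Rendered straight at target resolution: a hard, jagged edge (only 0s and 1s).
                aliased = render_edge(target_w, target_h)

                # Supersampled: render 4x larger per axis, then average each 4x4 block down.
                hires = render_edge(target_w * factor, target_h * factor)
                smoothed = hires.reshape(target_h, factor, target_w, factor).mean(axis=(1, 3))

                print(np.unique(aliased))            # [0. 1.] -- nothing but hard jaggies
                print(len(np.unique(smoothed)) > 2)  # True -- edge pixels now take in-between values
                ```

                Those in-between grey values along the edge are exactly the smoothing anti-aliasing is after; the catch is you rendered 16x the pixels to get them.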

                But it’s also very, very expensive computationally. GPUs capable of doing it alongside the other bells and whistles we have, like ray tracing, simply don’t exist, and if they did they’d cost too much. Even then, most games have to target consoles, which are solidly beaten by a flagship GPU even from several years ago.

                One other solution is to blur these jagged edges out, sacrificing detail for a “smooth” look.

                This is what FXAA does, but it creates a blurry image. It became very prevalent during the 7th-gen console era in particular because those consoles simply couldn’t push more than 720p in most games, at a time when Full HD TVs had become fairly common and shiny, polished graphics in trailers were a major way to make sales. This was further worsened by the fact that motion blur was often used to cover up low framerates and replicate the look of sleek, modern (at the time) digital blockbusters.

                SMAA fixed some of FXAA’s issues by being more selective about which pixels were blurred, and TAA eliminated the shimmering effect by also taking into account which pixels should be blurred across multiple frames.
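
                In the same toy spirit, here's roughly what these post-process approaches boil down to (a crude stand-in, nothing like the real FXAA/SMAA/TAA shaders): find high-contrast pixels and blend them with their neighbours.

                ```python
                import numpy as np

                def toy_post_aa(frame, threshold=0.25):
                    """Crude FXAA-style pass: blend high-contrast pixels with their neighbours."""
                    padded = np.pad(frame, 1, mode="edge")
                    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
                    contrast = np.abs(frame - neighbours)
                    # Blend only where local contrast is high, i.e. along hard edges.
                    return np.where(contrast > threshold, (frame + neighbours) / 2.0, frame)

                y, x = np.mgrid[0:64, 0:64]
                edge = (x > y).astype(float)         # hard diagonal edge, no AA

                smoothed = toy_post_aa(edge)
                print(len(np.unique(smoothed)) > 2)  # True: the jaggies are softened
                ```

                The jaggies get softer, but nothing is reconstructed; it’s purely blur, which is exactly why detail suffers.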

                Beyond this there are other tricks, like checkerboard rendering, where we render the frame in chunks at different resolutions based on what the player may or may not be looking at.

                In VR we also use foveated rendering, rendering a cone in front of the player’s immediate vision at a higher res than their periphery/what falls outside the eye’s natural focus; with eye-tracking tech, this actually works really well.

                But none of these are very good solutions, so we resort to another ugly, but potentially less bad, solution: rendering the game at a lower resolution and upscaling it, like a DVD played on an HDTV. But instead of a traditional upscaling algorithm like Lanczos, we use DLSS, which reconstructs the detail lost from a lower-resolution render based on the context of the frame using machine learning. This is efficient because the tensor cores now included on every GPU make N-dimensional array multiplication and mixed-precision FP math relatively computationally cheap.
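
                Structurally that pipeline is nothing exotic. Here's a minimal sketch of the non-AI baseline (nearest-neighbour upscaling for brevity, where Lanczos or bilinear would normally be used); DLSS swaps the fixed filter at the end for a learned model that is also fed motion vectors and previous frames, but the render-low-then-upscale shape is the same:

                ```python
                import numpy as np

                def render_low_res(w, h):
                    """Toy 'renderer': a gradient with a bright diagonal line across it."""
                    y, x = np.mgrid[0:h, 0:w]
                    frame = (x + y) / (w + h)
                    frame[np.abs(x - y) < 1] = 1.0
                    return frame

                low = render_low_res(960, 540)   # internal render resolution (~1/4 of the output pixels)
                scale = 2                        # 960x540 -> 1920x1080 output

                # Fixed-filter upscale; DLSS replaces this single step with ML-based reconstruction.
                output = np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)

                print(low.shape, "->", output.shape)  # (540, 960) -> (1080, 1920)
                ```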

                DLSS often looks better compared to FXAA, SMAA and TAA because all of those just literally blur the image in different ways, without any detail reconstruction, but it is not comparable to any real anti-aliasing technique like MSAA.

                But DLSS always renders at a lower res than native, so it will never be a 1:1 match for a true native image; it’s just an upscale. That’s okay, because that’s not the point. The purpose of DLSS isn’t to boost quality, it’s to be a crutch for low performance, which is why turning DLSS off, even from the Quality preset, will often tank performance.

                There is one situation where DLSS can look better than native, and it’s if, instead of the typical application of DLSS (rendering below native and then upscaling with ML guesswork), you use it to upscale the image from native resolution to a higher target resolution and output that.

                In Nvidia settings I believe this is called DL DSR factors.

                • FreedomAdvocate@lemmy.net.au · 19 days ago

                  I don’t even know where to begin; there’s so much wrong with this. I’ll have to come back when I’ve got more time.

  • sartalon@lemmy.world · 20 days ago

    Google has gotten so fucking dumb. Literally incapable of performing the same function it could 4 months ago.

    How the fuck am I supposed to trust Gemini!?

    • Echo Dot@feddit.uk · 20 days ago

      I find this current timeline so confusing. Supposedly we’re going to have AGI soon, and yet Google’s AI keeps telling you to stick glue on pizza. How can both things be true?

      • ZILtoid1991@lemmy.world · 20 days ago

        It’s the same reason they removed the headphone jacks from phones. They don’t want to give you a better product, they want to force you to use a product, even if it’s worse in all aspects.

        • Novaling@lemmy.zip · 19 days ago

          Whoa don’t come for Bluetooth like that. I like not having tangled wires and janky earbuds/headphones, especially because my clumsy ass used to snap the cords all the time by accident.

          I do agree though that we should get the choice of using a headphone jack or Bluetooth. I also miss having a jack, since I have to use my charging port to connect to my car radio…

          Edit: My comment was implying that I want phones with headphone jacks. I know that phones can have both headphone jacks and Bluetooth. Why am I getting downvoted?

          • cartoon meme dog@lemm.ee · 19 days ago

            There are some outlandish rumours that it’s possible for a device to have… both Bluetooth and a headphone jack.

            • ZILtoid1991@lemmy.world · 18 days ago

              My previous phone was like that, and it had a better DAC than some of the cheaper converters.