That would be easier if both GPU and display manufacturers weren't eschewing newer DisplayPort versions for older versions with DSC (which is not lossless, despite the marketing claim that it is "visually lossless"), while building in newer HDMI versions with greater performance.
To be fair, the DisplayPort 2.0/2.1 standardisation process was riddled with delays and they ended up landing years after HDMI 2.1 did. It stands to reason that hardware manufacturers picked up the earlier spec first.
what resolution is it that you can drive with "newer HDMI versions" but you cannot drive with DisplayPort 1.4 w/o DSC? The bandwidth difference is not really that much in practice, and "newer HDMI versions" also rely on DSC, or worse, chroma subsampling (objectively and subjectively worse).
I mean, one has been able to drive 5K, 4K@120Hz, etc. for nearly a decade with DP 1.4, while for the same resolutions you need literally the latest version of HDMI (the non-TMDS one). It's no wonder that display screens _have_ to use the latest version of HDMI, because otherwise they cannot be driven from a single HDMI port at all.
Having monitors that supported their native resolution through DP but not HDMI used to be a thing until very recently.
There are a lot of PC boards where the iGPU only has an HDMI 2.1 output, or only a DP 1.4 one. But DP 1.4 doesn't support some of the resolution/refresh combinations that HDMI 2.1 does. Normally this doesn't matter, but it could if you have, for example, the Samsung 57 inch dual 4K ultrawide.
The iGPU on my 9950X is perfectly capable of driving my Dell U4025QW 5k2k ultrawide. Yeah it would suck for any modern 3D games, but for productivity or light gaming it's fine.
It requires that I use the DisplayPort output on Linux, because I can't use HDMI 2.1 there. Because the motherboard has only one each of DisplayPort and HDMI, this limits my second screen.
It works fine with Intel and AMD iGPUs. They won't run many games at the native resolution though. Doesn't really matter to me, as the iGPUs are in work laptops for me, so 60Hz or better passes for "adequate".
Even a Raspberry Pi 4 or newer has dual 4K outputs that can fill the entire screen at native resolution. Macs have been the worst to use with it so far.
"Just don't support the majority of consumer displays" isn't really an acceptable solution for an organization attempting to be a player in the home entertainment industry.
> "Just don't support the majority of consumer displays" isn't really an acceptable solution for an organization attempting to be a player in the home entertainment industry.
I would recommend that Valve create an official list of consumer displays ("certified by Valve") that have proper support for the most recent version of DisplayPort, with all features relevant to gaming.
This way gamers know which display to buy next, and display vendors get free advertising for their efforts that is circulated to an audience that is very willing to buy a display in the near future.
The problem only affects a subset of HDMI 2.1 features, not HDMI 2.0.
But the Steam Machine isn't really super powerful (fast enough for a lot of games, and faster than what a lot of Steam customers have, sure, but still not that fast).
So most of the HDMI 2.1 features it can't use aren't that relevant. Like, sure, you don't get >60fps@4K, but you already need a good amount of FSR to get to 60fps@4K.
Just because the Steam Machine isn't powerful enough to support high framerates in modern AAA games doesn't mean it can't do so with older or less graphically-intensive games.
VRR and HDR are presumably the biggest issues, because HDMI 2.0 should already have enough bandwidth to support 8-bit 2160p120 with 4:2:0 chroma subsampling, which should work fine for most SDR games, and 144 Hz vs 120 Hz is, in my experience at least, not noticeably different enough to be worth fussing over.
Some people will want to use their Steam Machine as a general-purpose desktop, of course, where RGB or 4:2:2 is nonnegotiable. Though in this case 120 Hz — or 120,000/1001 Hz, thanks NTSC — is, again in my experience, superior to 144 Hz as it avoids frame pacing issues with 30/60 Hz video.
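A quick back-of-the-envelope check of that bandwidth claim (a sketch in Python; the ~10% blanking overhead is an assumption, not the exact CTA-861 timing):

    # Does 8-bit 2160p120 with 4:2:0 chroma subsampling fit in HDMI 2.0?
    # Rough numbers only; blanking overhead is approximated at 10%.

    def video_gbps(h, v, hz, bits_per_component, chroma, blanking=1.10):
        # Average number of components per pixel for each subsampling scheme.
        components = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}[chroma]
        return h * v * hz * components * bits_per_component * blanking / 1e9

    hdmi20_payload = 18.0 * 8 / 10   # 18 Gbps TMDS raw, 8b/10b coding -> ~14.4 Gbps usable

    needed = video_gbps(3840, 2160, 120, 8, "4:2:0")
    print(f"~{needed:.1f} Gbps needed vs ~{hdmi20_payload:.1f} Gbps available")
    # -> roughly 13 Gbps needed vs ~14.4 Gbps available, so it just fits.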
Aren't DP-HDMI adapters good enough for the majority of consumers? On my ancient (2017) PC with integrated graphics I can't tell a difference between the DP out vs the HDMI out.
The article mentions that the Club3D adapters (the popular ones) don't exist anymore; only off-brand alternatives remain. And VRR is not officially supported via adapters, a big problem for a gaming device.
I have it, it does not. Well, it may. It depends on the firmware you install on the cable. Depending on the firmware, different things will be broken. I tried them all. There's no version that will consistently support 2160p@120 and 4:4:4/RGB and HDR and VRR, and without random handshake issues.
I frequently see comments that say the TV companies are the ones getting the royalties, so I looked it up.
According to Gemini, the royalties go to the _original_ HDMI founders. That includes Sony, Panasonic, Philips, and Toshiba. It does not include Samsung, or LG.
There's no financial incentive. No other mass consumer device besides PCs uses DisplayPort; heck, even PCs generally have an HDMI port. So the percentage of TV buyers who actually need to use DisplayPort (basically Linux users) would be a very very very small minority.
I'd assume that if they aren't part of the HDMI cartel, as the above post suggests, they are paying patent fees for this garbage.
And they are in a good position to unblock this situation by increasing adoption of patent-free alternatives, so I don't see why they wouldn't have an incentive to avoid paying.
So I'd rather see them as somehow complicit then, instead of having no incentive in this case.
They have to pay the fees regardless, since no TV would sell if it didn't have an HDMI port. So unless the TV manufacturers can also convince set-top box makers, game console manufacturers, Blu-Ray makers etc to include DisplayPort, they'll need to continue including an HDMI port.
So this needs to be an industry-wide switch, not just TV makers.
For now, but that doesn't stop them from nudging things in the direction where HDMI will become obsolete by doing their part. I.e. it's not an instant thing, but each step in that direction helps and they can make a pretty significant one.
So the argument of no incentives just doesn't make sense, but it's a gradual process to get there. Unless their bean counters only understand super short term incentives. Then they should be blamed too for why things aren't improving in this regard.
The incentive seems very thin/weak. Pay extra now to push DP adoption and hope that in ~10-15 years you can drop the HDMI port? Meanwhile you still pay the cartel, and they invest your money directly against your interests. And it all hinges on predicting consumer adoption which is nearly impossible. I honestly don’t see how they could justify making such a step in that direction let alone a significant one.
For DP adoption it's too late. They should push for USB4 / Thunderbolt 4 instead. We are in the phase where about every new laptop has USB4. Connecting your laptop/phone to a TV might be a selling point. I'd love that for hotel TVs.
That's a catch 22 / circular argument that can always be used to excuse inaction, but it's not a real argument. Yes, it's a long term problem to solve and has many moving parts. But if they don't solve their part, they are only slowing it down even more. Any contribution to move things forward moves things forward, and lack of it delays things.
I.e. saying "we feed the cartel, so let's not do anything about it, since doing anything will only potentially help later and we still need to feed the cartel in the interim" doesn't really hold up as an argument. I.e. feeding the cartel and doing nothing is worse than feeding the cartel and doing what you can to stop it over time.
And their piece of this is pretty big (huge portion of TV market), that's why they in particular should be asked more than others, why they aren't doing their part.
It's not so much that it's a catch 22, it's that there's no financial incentive for them. TVs are a low-margin item already, and Samsung/LG get their margin by being brand names and advertising fancy features.
I doubt they would meaningfully save money over investing in DP, and the opportunity cost is greater for them to spend that money on the next "Frame" TV or whatever.
LG, Samsung and Sony are the only actual panel manufacturers and they probably bake those license fees into the panels they sell back to HDMI Forum.
Maybe, but by not solving the problem, they become part of the problem, even if they aren't part of the HDMI cartel directly. So it's their fault too that problems like the above happen.
That doesn't explain why they wouldn't want to get rid of HDMI to avoid paying patent fees for it. Adding USB 4 / DP to their TVs is a major step in that direction.
If you think this is proof of it being true, then I am both worried and astonished. How about looking for the information yourself, instead of relying on LLMs? This is HN I thought?!
Please don't post random LLM slop on HN, there's more than enough of it on the internet as is. The value of HN is the human discussion. Everyone here is capable of using an LLM if they so desire.
They should make pedestrian-only streets in the densest parts of Manhattan and use that money to improve public transportation. Even just a few blocks with no cars would make a huge difference for the livability of the city center.
There's a large (long-running) movement to do this in lower Manhattan, the most public-transit-connected area in the US (probably North America, and definitely up there in the world). It's picking up again.
We use tree-sitter[1] for parsing C declarations in Rizin[2] (see the "td" command, for example). See our custom grammar[3] (a modified mainstream tree-sitter-c). The custom grammar was sadly necessary due to Tree-sitter's inability to have alternate roots[4].
Code correctness should be checked automatically with CI and the test suite. New tests should be added. This is exactly what makes sure these stupid errors don't bother the reviewer. Same for code formatting and documentation.
This discussion makes me think peer reviews need more automated tooling somewhat analogous to what software engineers have long relied on. For example, a tool could use an LLM to check that the citation actually substantiates the claim the paper says it does, or else flags the claim for review.
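A minimal sketch of what such a checker might look like, assuming an OpenAI-style chat API; the model name, prompt, and YES/NO parsing are purely illustrative:

    # Sketch of the "does the citation substantiate the claim?" check proposed
    # above. Assumes an OpenAI-style chat API (openai>=1.0); prompt and verdict
    # parsing are deliberately naive.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def citation_supports_claim(claim: str, cited_excerpt: str) -> bool:
        prompt = (
            "A paper makes the following claim and cites another work for it.\n\n"
            f"Claim: {claim}\n\n"
            f"Relevant excerpt from the cited work: {cited_excerpt}\n\n"
            "Does the excerpt substantiate the claim? Answer YES or NO, then explain."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content.strip()
        return answer.upper().startswith("YES")

    # Claims whose citations fail the check would be flagged for human review,
    # not auto-rejected.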
I'd go one further and say all published papers should come with a clear list of "claimed truths", and one should only be able to cite said paper by linking to an explicit truth.
Then you can build a true hierarchy of citation dependencies, checked 'statically', and have better indications of impact if a fundamental truth is disproven, ...
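As a toy sketch of that idea (the claim IDs and graph below are made up for illustration), the "static" impact check could be as simple as a walk over a claim-level dependency graph:

    # Each paper exports named claims, and citations point at specific claim IDs
    # rather than whole papers. If a claim is later disproven, everything that
    # transitively depends on it can be flagged.

    claims = {
        "paperA:thm1": [],                      # foundational claim, no dependencies
        "paperB:lemma2": ["paperA:thm1"],       # cites paperA's theorem 1 specifically
        "paperC:result": ["paperB:lemma2", "paperA:thm1"],
    }

    def affected_by(disproven: str) -> set[str]:
        """All claims that transitively depend on a disproven claim."""
        hit = set()
        changed = True
        while changed:
            changed = False
            for claim, deps in claims.items():
                if claim not in hit and (disproven in deps or hit & set(deps)):
                    hit.add(claim)
                    changed = True
        return hit

    print(affected_by("paperA:thm1"))  # {'paperB:lemma2', 'paperC:result'}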
Could you provide a proof of concept paper for that sort of thing? Not a toy example, an actual example, derived from messy real-world data, in a non-trivial[1] field?
---
[1] Any field is non-trivial when you get deep enough into it.
hey, I'm part of the GPTZero team that built the automated tooling to get the results in that article!
totally agree with your thinking here; we can't just give this to an LLM, because of the need for industry-specific standards for what counts as a hallucination / a match, and for how to do the search
One could submit their bibtex files and expect bibtex citations to be verifiable using a low-level checker.
Worst-case scenario, if your bibtex citation was a variant of one in the checker database, you'd be asked to correct it to match the canonical version.
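A minimal sketch of such a low-level checker, assuming the entries carry DOIs and using the public Crossref API (the regex and error handling are deliberately simplistic):

    # Pull DOIs out of a .bib file and confirm each one resolves to a real record
    # via the public Crossref API. Matching the full entry against the canonical
    # record (title, authors, year) would be the next step.
    import re
    import requests

    def check_bib_dois(bib_path: str) -> None:
        text = open(bib_path, encoding="utf-8").read()
        dois = re.findall(r'doi\s*=\s*[{"]([^}"]+)[}"]', text, flags=re.IGNORECASE)
        for doi in dois:
            r = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
            if r.status_code == 200:
                title = r.json()["message"].get("title", ["(no title)"])[0]
                print(f"OK   {doi}: {title}")
            else:
                print(f"FAIL {doi}: no Crossref record (typo or hallucinated citation?)")

    # check_bib_dois("references.bib")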
However, as others here have stated, hallucinated "citations" are actually the lesser problem. Citing irrelevant papers based on a fly-by reference is a much harder problem; this was present even before LLMs, but this has now become far worse with LLMs.
Yes, I think verifying mere existence of the cited paper barely moves the needle. I mean, I guess automated verification of that is a cheap rejection criterion, but I don’t think it’s overall very useful.
this is still in beta because it's a much harder problem for sure, since it's hard to determine if a 40-page paper supports a claim (if the paper claims X is computationally intractable, does that mean algorithms to compute approximate X are slow?)
Sage Math? Though I admit, unlike the homogeneous Mathematica, it's just Python glue over multiple smaller projects of varying quality, and poorly integrated. I wish there were something more like the Wolfram software, but there isn't.
I quite like Sage. Python is a much better language than Wolfram (yes, he named it after himself...). In Wolfram, there is no real scoping (even different notebooks share all variables, Module[] is incredibly clumsy), no real control flow (If[] is just a function), and no real error handling. When Wolfram encounters an exception, it just prints a red message and keeps chugging along with the output of the error'd function being replaced by a symbolic expression. This usually leads to pages and pages of gibberish and/or crashes the kernel (which for some reason is quite difficult to interrupt or restart). Together with the notebook format and the laughable debugger, this makes finding errors extremely frustrating.
The notebooks are also difficult to version control (unreadable diffs for minor changes), and unit testing is clearly just an afterthought. Also the GUI performance is bad. Put more than a handful of plots on a page, and everything slows to a crawl. What keeps me coming back is the comprehensive function library, and the formula inputs. I find it quite difficult to spot mistakes in mathematical expressions written in Python syntax.
Different languages are better at different things, so it rarely makes much sense to say that one language is better than another in general. Python is definitely much better than Mathematica for "typical" imperative programming tasks (web servers, CLI programs, CRUD apps, etc.), but Mathematica is much better at data processing, symbolic manipulation, drawing plots, and other similar tasks.
> there is no real scoping (even different notebooks share all variables, Module[] is incredibly clumsy)
Scoping is indeed an absolute mess, and the thing that I personally find the most irritating about the language.
> no real control flow (If[] is just a function)
You're meant to program Mathematica by using patterns and operating on lists as a whole, so you should rarely need to use branching/control flow/If[]. It's a very different style of programming that takes quite a while to get used to, but it works really well for some tasks.
> no real error handling
For input validation, you should use the pattern features to make it impossible to even call the function with invalid input. And for errors in computation, it often makes the most sense to return "Undefined", "Null", "Infinity", or something similar, and then propagate that through the rest of the expression.
> The notebooks are also difficult to version control (unreadable diffs for minor changes)
Mathematica notebooks tend to do slightly better with version control than Jupyter Notebooks, although they're both terrible. You can work around this with Git clean/smudge filters, or you can just use ".wls"/".py" files directly.
For writing production code, I find good scoping rules non-negotiable. And error handling, monitoring etc has to be well thought out before deploying at scale.
So as great as Mathematica sounds for interactive math and science computations, it sounds like a poor tool for building systems that will be deployed and used by many people.
> So as great as Mathematica sounds for interactive math and science computations, it sounds like a poor tool for building systems that will be deployed and used by many people.
Yes, I definitely agree there. Mathematica is definitely great for interactive use, but I'm not really aware of anyone aside from Wolfram himself who tries to deploy it at scale.
That is a fair assessment. By and large it is used for the former. It is super handy in the exploratory phase of certain kinds of mathematical research.
How is "If" as a function even a drawback? It is largely seen as something desired, no? I would see that as a huge advantage, which allows for very powerful programming and meta-programming techniques.
One potential issue is that unlike most other languages, it doesn't create a new scope. But almost nothing in Mathematica introduces a new scope, and Python also uses unscoped "if"s, so it's rarely much of a problem in practice.
But with pattern matching, you almost never need to use "If[]" anyway; you just write a separate definition for each case, e.g. fib[0] = 0; fib[1] = 1; fib[n_] := fib[n - 1] + fib[n - 2].
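A quick illustration of the Python point above:

    # Python's "if" does not introduce a new scope either: names bound inside
    # the branch remain visible after it.
    if True:
        x = 42
    print(x)  # prints 42; x "leaked" out of the if-block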
On the note of Jupyter notebooks and version control - there was a talk at this year's PyCon Ireland about using a built-in cleaner for notebooks when committing the JSON (discarding the cell results), and then dropping the whole lot into a CI system utilising remote execution (and Bazel or similar) to run and cache the outputs. It was a talk from CodeThink. No video up yet though. The scenario was reproducible notebooks for processing data from a system under test.
> On the note of Jupyter notebooks and version control - there was a talk at this year's PyCon Ireland about using a built-in cleaner for notebooks when committing the JSON (discarding the cell results)
Yup, I use a long "jq" command [0] as a Git clean filter for my Jupyter notebooks, and it works really well. I use a similar program [1] for Mathematica notebooks, and it also works really well.
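For anyone who'd rather not depend on jq, a rough Python equivalent of such a clean filter could look like this (the filter name "nbstrip" and the script name are placeholders, not the exact setup from [0]):

    #!/usr/bin/env python3
    # nbstrip.py: read a .ipynb from stdin, strip outputs and execution counts,
    # write the cleaned JSON to stdout. Register as a Git clean filter with e.g.:
    #   git config filter.nbstrip.clean "python3 nbstrip.py"
    #   echo '*.ipynb filter=nbstrip' >> .gitattributes
    import json
    import sys

    nb = json.load(sys.stdin)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    json.dump(nb, sys.stdout, indent=1, ensure_ascii=False)
    sys.stdout.write("\n")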
This is not true. Mathematica has the concept of contexts. You can have each notebook use its own unique context. Mathematica packages create their own context too; we are not talking about Module[] here, which is useful for local variable scoping. Packages and contexts give you the isolation you are looking for. These are things that have been around since the initial Mathematica 1.0 in 1988 (!). https://reference.wolfram.com/language/ref/Context.html
Fully agreed. I have never seen a programming language which is so badly designed as Wolfram. I really wish there was another way to access all of Mathematica's functionality with a more sane interface.
I've used Sage for years to run the backend (calculations/computations/graphics/prototyping) for a multivariable calculus class I teach. It's not perfect, but as a lightweight, Python-style CAS to do all sorts of "standard" calculations, it's very easy to use!
I tried Sage Math. Just the fact that one has to declare all variables before using them makes it extremely annoying. In Mathematica, I frequently do computations that have a couple of dozen variables. I am not going to write boilerplate for 20 different variables in every notebook.
It's because of that youth exodus (to Australia mainly) that the government is pushing for Gelephu Mindfulness City as a place for innovation and new business opportunities. That's what the gov't officials directly argued when asked.
The biggest problem with SourceHut, which should be solved before any mass migration of open source projects: the lack of organizations that would allow multiple contributors to work on a project, especially projects with multiple repositories.