
Sorry, but as someone who has done both C# in Visual Studio and embedded development in vim/gdb/UNIX, your claims do not hold up...

> have no comparable analogue in traditional text-focused, non-IDE editors.

The same features have been there in TUI editors for many, many years. There is nothing visual about the backend that handles those queries.

Whether the interface is easier to use or not is a different matter, but the features have been there for decades.

> Not to mention visual debuggers

Well, yes, visual debuggers are better with a GUI, that is obvious.

> not to mention remote debugging!

Remote debugging is very common. What you are thinking of is visual remote debugging.

> However, objective tests show that even highly skilled users aren't as fast at completing high-level tasks as mouse-users.

Citation needed.

Blender, for instance, is praised by artists for its hard-surface modeling productivity, thanks to all the keyboard shortcuts.

A few years ago, when I worked in VS, I used all kinds of shortcuts. Whether the tool has a GUI or not does not determine how useful the keyboard is. The two are orthogonal.

> took days or weeks to solve the same problem! Days!

That is just nonsense. There is no way it takes days/weeks to "debug" something that takes 30 minutes in another general-purpose language.

Either the task could be trivially solved using the .NET standard library but not the others (which makes the comparison meaningless), or they picked participants with no experience in the other languages.

The debugger or the GUI/TUI distinctions have nothing to do with that. A visual debugger is most useful when dealing with very big code bases that you did not write yourself. For a programming puzzle that fits in 100 lines, a visual debugger is irrelevant unless you are new to programming.


An IT degree is not about writing code. Perhaps you are referring to CS.


I don't like the constant UI changes, the ugly configuration menus, the new bar and the bundling of stuff nobody asked for like Pocket.


Yes, you are missing something.

Two threads reading and writing the same memory area do not necessarily cause problems. In fact, a lot of software is built to exploit guarantees about how memory accesses are ordered with respect to each other.

ARM processors give very few ordering guarantees, so code has to work around that.
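As a minimal sketch (my own illustration, not part of the original comment), here is the classic producer/consumer flag pattern in C++. With relaxed atomics it may appear to work on x86's stronger ordering, but it needs release/acquire ordering to be correct on ARM:

    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;                 // plain shared data
    std::atomic<bool> ready{false};  // flag guarding the payload

    void producer() {
        payload = 42;
        // release: the write to payload becomes visible before the flag flips
        ready.store(true, std::memory_order_release);
    }

    void consumer() {
        // acquire: pairs with the release store above
        while (!ready.load(std::memory_order_acquire)) { /* spin */ }
        // Guaranteed with release/acquire. With memory_order_relaxed on both
        // sides, this assert could fire on ARM's weaker memory model, while
        // x86's stronger ordering would usually hide the bug.
        assert(payload == 42);
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }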


Amazon marketing claims are not something you should trust.


Disclosure: I work at AWS, building cloud infrastructure.

It's good to be skeptical. I always encourage folks to do experiments using their own trusted methodology. I believe that the methodology engineering used to support this overall benefit claim (40% price/performance improvement) is sound. It is not the "benchmarketing" that I personally find troubling in the industry.


For physical hardware, we can measure power consumption, heat, and performance, and we can buy it at retail price.

For a cloud instance, we can't measure power consumption or heat, noisy neighbors may be present while benchmarking, and we can't know the real price. I'm not blaming anyone, but it makes comparing the hardware difficult.


I always frame it this way: if there were a 40% price/perf improvement, why isn't everyone (including AWS!) using ARM clouds?


We /are/ using them. :-)


Phones are not built for performance, which is what was asked about.

As for supercomputers, >90% of them are Intel/AMD.


Phones are built for performance per watt. Phones are benchmarked. In the context of a discussion about Apple introducing ARM chips into the MacBook line, performance per watt is far more meaningful. For most users, battery life is the issue once minimum performance criteria have been met.

Will there be Razer laptops that last less than an hour on battery but can beat them? Sure.

Will there be people who complain that the Mac isn't fast enough when plugged in? That is already happening: recent MacBook Pros have drawn complaints about thermal throttling that an obviously slightly larger Dell with a decent fan doesn't have.

But Apple will build performance laptops using ARM chips, and they will be faster than the equivalent Intel MacBooks if only because they aren't throttled.


The context of the discussion is literally ARM scaling up to desktop performance.

The person you replied to said:

> There is no proof that these outperform traditional CPUs at all.

To which you replied talking about embedded market share and supercomputers, which have nothing to do with that.

Since you now mention Apple and MacBooks, which hadn't even been mentioned, I think you are replying to the wrong thread/post.


It started as a memcached with persistence to disk for fast startup.


No, it was always about data structures.

Lists, hashes, sets and such that you have available in your programming language, but available as a networked server so any language/app/process can access the same data using the same useful structures.
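To make that concrete, here is a hedged sketch (my example, not the original comment's) using the hiredis C client against a Redis server assumed to be running locally, treating Redis lists/sets/hashes like the in-process structures a language would give you:

    #include <hiredis/hiredis.h>
    #include <cstdio>

    int main() {
        // Assumes a Redis server on the default local port.
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == nullptr || c->err) { std::fprintf(stderr, "connection failed\n"); return 1; }

        // The same structures a language gives you (lists, sets, hashes),
        // but shared over the network between any processes/languages.
        redisReply *r;
        r = (redisReply *) redisCommand(c, "RPUSH mylist %s", "task1");      freeReplyObject(r);
        r = (redisReply *) redisCommand(c, "SADD tags %s", "urgent");        freeReplyObject(r);
        r = (redisReply *) redisCommand(c, "HSET user:1 name %s", "alice");  freeReplyObject(r);

        r = (redisReply *) redisCommand(c, "LRANGE mylist 0 -1");
        for (size_t i = 0; i < r->elements; i++)
            std::printf("mylist[%zu] = %s\n", i, r->element[i]->str);
        freeReplyObject(r);

        redisFree(c);
        return 0;
    }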


Only in a subset of cases, which is the problem: you cannot simply always use it like the previous extensions.


None of the previous extensions could be used blindly, either. It was a while before people figured out how to use MMX or SSE well, and people still often find that the scalar version of an algorithm beats their vector version.


I am not talking about ease of use, but about the downclock.

The other extensions do not trigger it, not even AVX256.

With AVX512 it is not always a win, and you don't even know until you try on the particular hardware.


The 256-bit vector instructions do trigger a downclock, but not as severe as the AVX512 downclock.


You are 100% right, it is AVX1 I was thinking about (and nowadays I am not sure whether that one has a downclock either).



I don’t think that applies to modern AMD processors, though.

Agner’s microarchitecture.pdf says about Ryzen “There is no penalty for mixing AVX and non-AVX vector instructions on this processor.”

Not sure if it applies to Zen 2 but I’ve been using one for a year for my work, AVX 1 & 2 included, I think I would have noticed.


AMD processors used to implement AVX instructions by double-pumping them, using only 128-bit vector ALUs. This means there's no clock penalty, but there's also no speedup over an SSE instruction by doing so. I don't know if this is still the case with the newest µarchs though.


> but there's also no speedup over an SSE instruction by doing so

Just because they are split doesn’t mean they run sequentially. Zen 1 can handle up to 4 floating-point micro-ops/cycle, and there are 4 floating-point execution units, 128 bits wide each (that’s excluding load/store; these 4 EUs only compute).

Native 256-bit units are even faster due to fewer micro-ops and potentially more in-flight instructions, but I’m pretty sure that even on Zen 1 AVX is faster than SSE.


It depends. If the EUs are actually the bottleneck, then SSE vs. AVX wouldn't make any difference in speed in that case.

However, when instruction decode/retire is the bottleneck, AVX can be faster. I remember this being the case on Intel Sandy Bridge (first-gen AVX, double-pumped, retires 3 instructions/cycle), where AVX can sometimes be faster (usually it's not that different).

With recent CPUs from both Intel and AMD able to decode/retire at least 4 instructions per cycle, this has largely ceased to be the case.


> AVX can be faster

Yes. Another possible reason for that is instructions without SSE equivalents. I remember working on some software where the AVX2 broadcast load instruction helped substantially.
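As a hedged sketch (my illustration; the function name and assumptions are mine, not the poster's), a broadcast load fills every lane of a 256-bit register from a single scalar, which is handy for things like scaling an array by a constant:

    #include <immintrin.h>
    #include <cstddef>

    // dst[i] = src[i] * factor, 8 floats at a time.
    // Illustrative only: assumes n is a multiple of 8; compile with AVX enabled.
    void scale(float *dst, const float *src, float factor, std::size_t n) {
        __m256 f = _mm256_broadcast_ss(&factor);  // vbroadcastss: one scalar -> 8 lanes
        for (std::size_t i = 0; i < n; i += 8) {
            __m256 v = _mm256_loadu_ps(src + i);
            _mm256_storeu_ps(dst + i, _mm256_mul_ps(v, f));
        }
    }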


Why link to a 5 year old thread? There has to be more recent work.


There is more recent work. This blog post by Travis Downs is the most detailed analysis of transition behavior I've seen: https://travisdowns.github.io/blog/2020/01/17/avxfreq1.html

For general guidelines on when to use AVX-512, this (older) post remains the best guide I've seen: https://lemire.me/blog/2018/09/07/avx-512-when-and-how-to-us...


So, are programs that are compiled with those instructions faster or slower? In my experience they have been faster.


Short answer: Yes, faster. Long answer: It depends, and you may be measuring the wrong thing.

Among other things, it depends on the workload and the exact processor. You can find plenty of cases where AVX512 makes things faster. You can also find cases where the entire system slows down because it is running sections of AVX512 code here and there: apparently, for certain Intel processors, the CPU needs to power up the top 256 bits of the register files and interconnects, and to get full speed for AVX512 it will alter its voltage and clock speed. This reduces the speed of other instructions and even other cores on the same die (which may be surprising).

While the specifics may be new, the generalities seem familiar—it has long been true that a well-intentioned improvement to a small section of your code base can improve performance locally while degrading overall system performance. The code that you’re working on occupies a smaller and smaller slice of your performance metrics, and meanwhile, the whole system is slowing down. There are so many reasons that this can happen, dynamic frequency scaling with AVX512 is just one more reason.


> apparently, for certain Intel processors

Not certain Intel processors: all of them. The CPU will reduce its clocks, reducing its overall performance.

Using AVX256 or AVX512 is easily a net negative on performance, depending on the software, input data and other processes running in the system.

Unless you are certain you have enough data to offset the downclock, don't use them.
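One common mitigation, sketched below with made-up numbers and function names (my illustration, not a recommendation from the thread): only take the 512-bit path when the workload is large enough to plausibly amortize a frequency transition, and stay on a 128-bit path otherwise.

    #include <immintrin.h>
    #include <cstddef>

    // Made-up threshold; in practice it has to be tuned per CPU model and
    // workload, which is exactly the difficulty described above.
    constexpr std::size_t kWideThreshold = 1u << 20;  // elements

    // 128-bit path (SSE): no AVX-512 frequency implications.
    static float sum_sse(const float *a, std::size_t n) {
        __m128 acc = _mm_setzero_ps();
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4)
            acc = _mm_add_ps(acc, _mm_loadu_ps(a + i));
        float tmp[4];
        _mm_storeu_ps(tmp, acc);
        float s = tmp[0] + tmp[1] + tmp[2] + tmp[3];
        for (; i < n; ++i) s += a[i];
        return s;
    }

    // 512-bit path: only worth it if the work amortizes a possible downclock.
    static float sum_avx512(const float *a, std::size_t n) {
        __m512 acc = _mm512_setzero_ps();
        std::size_t i = 0;
        for (; i + 16 <= n; i += 16)
            acc = _mm512_add_ps(acc, _mm512_loadu_ps(a + i));
        float s = _mm512_reduce_add_ps(acc);
        for (; i < n; ++i) s += a[i];
        return s;
    }

    float sum(const float *a, std::size_t n) {
        // Gate the wide path on problem size instead of using it unconditionally.
        return n >= kWideThreshold ? sum_avx512(a, n) : sum_sse(a, n);
    }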


Different causes, similar consequences.


Porting SSE code to AVX (with equivalent instructions and a proper vzeroupper) will increase performance in most cases (the only case where it can be slower, off the top of my head, is on Sandy Bridge). The same is not true for AVX to AVX512.


It will increase performance if you have a sufficient amount of dense data as input.

When that’s the case, especially if the numbers being crunched are 32-bit floats, there’s not much point in doing it on the CPU at all; GPGPUs are way more efficient for such tasks.

However, imagine sparse matrix * dense vector multiplication. If you rarely have more than 4 consecutive non-zero elements in the rows of the input matrix, and there are large gaps between non-zero elements, moving from SSE to AVX or AVX512 will decrease performance; you’ll just be wasting electricity multiplying by zeros.
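A hedged sketch of the kind of kernel being described (the data layout and names are my own illustration, not the poster's code): each stored run is at most 4 consecutive non-zeros, so a single 128-bit multiply covers it, and wider vectors would mostly be multiplying zero padding.

    #include <immintrin.h>
    #include <cstddef>
    #include <vector>

    // A run of up to 4 consecutive non-zeros in one matrix row, zero-padded to 4.
    struct Run {
        std::size_t row;   // destination row in y
        std::size_t col;   // column of the first element of the run
        float vals[4];     // the non-zeros, padded with zeros
    };

    // y += A * x, with A stored as short runs. One SSE multiply per run;
    // a 256- or 512-bit lane would mostly hold padding zeros for data like this.
    // Assumes x is padded so a 4-wide load starting at any stored col is safe.
    void spmv(const std::vector<Run> &runs, const float *x, float *y) {
        for (const Run &r : runs) {
            __m128 a = _mm_loadu_ps(r.vals);
            __m128 v = _mm_loadu_ps(x + r.col);
            __m128 p = _mm_mul_ps(a, v);
            float tmp[4];
            _mm_storeu_ps(tmp, p);
            y[r.row] += tmp[0] + tmp[1] + tmp[2] + tmp[3];
        }
    }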


So in some sense very similar to SKX behavior? The first iteration of the instruction set implementation requires judicious use of the instructions, while later implementations don't (this is something to be upset about... those "later implementations" should have been available quite some time ago).

This is also ignoring the fact that none of these penalties come into play if you use the AVX512 instructions with 256-bit or 128-bit vectors. (This still has significant benefits due to the much nicer set of shuffles, dedicated mask registers, etc.)
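For instance (a hedged sketch of my own, assuming a CPU with AVX-512VL support), the per-lane masking and richer instruction set can be used directly on 256-bit vectors:

    #include <immintrin.h>

    // AVX-512VL: 512-bit ISA features (masking, extra shuffles) applied to
    // 256-bit vectors, avoiding the heaviest penalties tied to 512-bit operation.
    // Lanes not selected by `keep` are zeroed (the maskz variant).
    __m256 masked_add(__m256 a, __m256 b, __mmask8 keep) {
        return _mm256_maskz_add_ps(keep, a, b);
    }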


AVX to AVX512 will "increase performance in most cases": https://www.researchgate.net/figure/Speedup-from-AVX-512-ove...


When those projects started, not much. They offered Git storage and an issue list, pretty much.

Then they started to grow, especially GitLab in the beginning, and now they try to be a single centralized solution for all software development needs, like Atlassian, IBM, and others have always been selling.


I genuinely ask: why is "a sense of community" important in a productivity tool?

I have never cared about all the "social features" that work tools seem to always end up introducing...


Software is a collaborative effort, and the easier you make it to collaborate and share information with others, the better. Imagine @ing a GNOME dev on a different instance to comment on a bug report in KDE, or seeing a newsfeed update that your favorite framework is making a breaking change in the next release, or opening a PR/MR in another project without having to create _yet another_ account.

You don't need to use any of those, but I think that the large community is partly what makes GitHub such a valuable tool.


> Imagine @ing a GNOME dev in a different instance to comment on a bug report in KDE

How do you see this being different from providing that dev with a hyperlink to the bug report? In either case, the developer is made aware of the issue, no?

> or seeing an newsfeed update that your favorite framework is making a breaking change in the next release

I don't actually know if changelists have RSS feeds, but supposing they did, couldn't you subscribe your reader to those feeds to achieve this result?

> or opening a PR/MR in another project without having to create _yet another_ account.

Yeah, you got me here. Though I'm leaning towards "build a physical key that automates account creation everywhere" so you still have a zillion accounts under the hood, but that's mostly transparent to the user. Sort of like Facebook/Google SSO, but instead of storing data in one data-hungry corp's DB, you're generating essentially random data in one place (your physical key) and distributing it across zillions of little DBs, thus reducing the incentive for hackers to try to obtain any of them.


It's different from a hyperlink in that you don't need to potentially log into a different project's infrastructure to share the information.


This is a good answer, but the feature you mention sounds to me like it is useful because it is a productivity feature (saving time on creating accounts and switching browser tabs), not really because it is a social one.

Like when telephone lines were introduced, they were a massive boost to productivity. Even if they could be used as a social feature within companies, it was not why they were useful there.


Not really. What you're talking about is OpenID/some sort of OAuth. The parent comment is talking about mixing the activity from a bunch of projects into one.

Of course, GitLab projects do have RSS feeds, so you could use that, I guess.


OpenID etc. do not avoid having to switch tabs into another site.


I have used GitHub to host some of my professional work that is publicly available, and I do use it primarily as a productivity tool, with the benefit that I can link other people to it if they need to see it.

However, I find the "sense of community" features on GitHub to be really important, because I also do a lot of unrelated open-source work as a hobby. In those areas, I'm able to follow people who are coding things similar to mine. I'm able to see when they create a new project, and seeing their stars often leads me to new tools that I find useful. I'm hopeful that the people that follow me or my repositories feel similarly. A sense of community helps to make me enjoy the work I'm doing a little more.

It's kind of like running into someone who's looking through the same section of the hardware store as you. I'm not going to the store to talk with them, but if I end up having a nice interaction with someone who's working on something similar to me or having the same problem I am, it usually brightens my day a little bit.


If you're an open-source project, the community aspect allows better engagement of... well, the community, that is, devs and users. For devs that means reviews and interactions; for users, issue reporting and maybe some support.

Integration of community features improves visibility and situational awareness. Compare this to emails, IRC, or forums.

Of course, the effectiveness of this is only as good as the ability to manage the vast amount of information that gets generated in open projects like this. What's the point of having everything in one place if it's hard for users to find the information they need, or if no one is able to properly take care of the tons of issues raised by the community... Anyway, GitLab is a tool that helps one organize and tie together these streams of project information.


> Integration of community features improves visibility and situational awareness. Compare this to emails, IRC, or forums.

What do you see being some benefits over the older tech? Because when I think "community features" I'm picturing something like Tweetdeck for devs (which, in fairness, may be completely different from what you're picturing). Basically, this repo I'm watching had these updates. That sounds like the same thing I get in email from an issue tracker, complete with comments from others watching.


> What do you see being some benefits over the older tech?

Think of a remote-first collaboration arrangement. In such a scenario, concentrated communication becomes vital for collaboration efficiency.

'Old' approaches could work too: emails can be typed and sent with the right set of CCs, IRC can be set up for chat, and teams can roll their own messaging tool of choice.

It's all about policies and consistency in adhering to them. When these features are devised together as part of a tool, it offers those policies to the client out of the box. Thus, feature planning or collaborating on a merge request becomes more transparent, and perhaps more real-time, when the tool supports the 'social' features.

Even a feature as simple as "@user" mention notifications can raise the level of collaboration.


Right? If I wanted to deal with people I wouldn't have become a computer programmer.


It is not about being social or not (I am a quite social individual); it is the context that I am wondering about.

When I am working, I am not trying to be social but to be professional.

