> So the question is, can you hide the formal stuff under the hood, just like you can hide a calculator tool for arithmetic? Use informal English on the surface, while some of it is interpreted as a formal expression, put to work, and then reflected back in English?
The problem with trying to make "English -> formal language -> (anything else)" work is that informality is, by definition, not a formal specification and therefore subject to ambiguity. The inverse is not nearly as difficult to support.
It is much like how a property in an API initially defined as optional cannot later be made mandatory without potentially breaking clients, whereas making a mandatory property optional can be backward compatible. IOW, the cardinality "0 .. 1" is a strict superset of "1".
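A minimal Python sketch of that cardinality point (all names here are hypothetical): tightening an optional property to mandatory rejects payloads that older clients legitimately send, while the original validator accepts everything the stricter one does.

```python
def validate_v1(payload: dict) -> None:
    # v1 contract: "name" is mandatory (1), "nickname" is optional (0..1)
    if "name" not in payload:
        raise ValueError("missing mandatory property: name")

def validate_v2(payload: dict) -> None:
    # v2 tightens the contract: "nickname" is now mandatory (1)
    validate_v1(payload)
    if "nickname" not in payload:
        raise ValueError("missing mandatory property: nickname")

old_client_payload = {"name": "Ada"}   # valid under v1
validate_v1(old_client_payload)        # accepted
try:
    validate_v2(old_client_payload)    # the same payload now breaks
except ValueError as e:
    print(e)
```

The reverse change (v2 relaxing a mandatory field to optional) would accept every payload v1 did, which is why that direction is backward compatible.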
> The problem with trying to make "English -> formal language -> (anything else)" work is that informality is, by definition, not a formal specification and therefore subject to ambiguity. The inverse is not nearly as difficult to support.
Both directions are difficult and important. How do you determine when going from formal to informal that you got the right informal statement? If you can judge that, then you can also judge if a formal statement properly represents an informal one, or if there is a problem somewhere. If you detect a discrepancy, tell the user that their English is ambiguous and that they should be more specific.
It's an interesting presentation, no doubt. The analogies eventually fail as analogies usually do.
A recurring theme presented, however, is that LLM's are somehow not controlled by the corporations which expose them as a service. The presenter made certain to identify three interested actors (governments, corporations, "regular people") and how LLM offerings are not controlled by governments. This is a bit disingenuous.
Also, the OS analogy doesn't make sense to me. Perhaps this is because I do not subscribe to LLM's having reasoning capabilities, nor to their being able to reliably provide services an OS-like system can be shown to provide.
A minor critique regarding the analogy equating LLM's to mainframes:
Mainframes in the 1960's never "ran in the cloud" as it did
not exist. They still do not "run in the cloud" unless one
includes simulators.
Terminals in the 1960's - 1980's did not use networks. They
used dedicated serial cables or dial-up modems to connect
either directly or through stat-mux concentrators.
"Compute" was not "batched over users." Mainframes either
had jobs submitted and ran via operators (indirect execution)
or supported multi-user time slicing (such as found in Unix).
> The presenter made certain to identify three interested actors (governments, corporations, "regular people") and how LLM offerings are not controlled by governments. This is a bit disingenuous.
I don't think that's what he said, he was identifying the first customers and uses.
>> A recurring theme presented, however, is that LLM's are somehow not controlled by the corporations which expose them as a service. The presenter made certain to identify three interested actors (governments, corporations, "regular people") and how LLM offerings are not controlled by governments. This is a bit disingenuous.
> I don't think that's what he said, he was identifying the first customers and uses.
The portion of the presentation I am referencing starts at or near 12:50[0]. Here is what was said:
I wrote about this one particular property that strikes me
as very different this time around. It's that LLM's like
flip they flip the direction of technology diffusion that
is usually present in technology.
So for example with electricity, cryptography, computing,
flight, internet, GPS, lots of new transformative that have
not been around.
Typically it is the government and corporations that are
the first users because it's new expensive etc. and it only
later diffuses to consumer. But I feel like LLM's are kind
of like flipped around.
So maybe with early computers it was all about ballistics
and military use, but with LLM's it's all about how do you
boil an egg or something like that. This is certainly like
a lot of my use. And so it's really fascinating to me that
we have a new magical computer it's like helping me boil an
egg.
It's not helping the government do something really crazy
like some military ballistics or some special technology.
Note the identification of historic government interest in computing along with a flippant "regular person" scenario in the context of "technology diffusion."
You are right in that the presenter identified "first customers", but this is mentioned in passing when viewed in context. Perhaps I should not have characterized this as "a recurring theme." Instead, a better categorization might be:
The presenter minimized the control corporations have by
keeping focus on governmental topics and trivial customer
use-cases.
Yeah that's explicitly about first customers and first uses, not about who controls it.
I don't see how it minimizes the control corporations have to note this. Especially since he's quite clear about how everything is currently centralized / time share model, and obviously hopeful we can enter an era that's more analogous to the PC era, even explicitly telling the audience maybe some of
them will work on making that happen.
Just for fun, I wondered how small a canonical hello world program could be on macOS running an ARM processor. Below is based on what I found here[0] with minor command-line switch alterations to account for a newer OS version.
ARM64 assembly program (hw.s):
//
// Assembler program to print "Hello World!"
// to stdout.
//
// X0-X2 - parameters to macOS function services
// X16 - macOS function number
//
.global _start // Provide program starting address to linker
.align 2
// Setup the parameters to print hello world
// and then call macOS to do it.
_start: mov X0, #1 // 1 = StdOut
adr X1, helloworld // string to print
mov X2, #13 // length of our string
mov X16, #4 // macOS write system call
svc 0 // Call macOS to output the string
// Setup the parameters to exit the program
// and then call macOS to do it.
mov X0, #0 // Use 0 return code
mov X16, #1 // Service command code 1 terminates this program
svc 0 // Call macOS to terminate the program
helloworld: .ascii "Hello World!\n"
> A point that may be pedantic: I don't add (and then remove) "print" statements. I add logging code, that stays forever. For a major interface, I'll usually start with INFO level debugging, to document function entry/exit, with param values.
This is an anti-pattern which results in voluminous log "noise" when the system operates as expected. To the degree that I have personally seen gigabytes per day produced by employing it. It also can litter the solution with transient concerns once thought important but no longer relevant.
If detailed method invocation history is a requirement, consider using the Writer Monad[0] and only emitting log entries when either an error is detected or in an "unconditionally emit trace logs" environment (such as local unit/integration tests).
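For illustration, a minimal Python sketch of that Writer-style approach (the class and function names are made up): trace entries accumulate alongside the computation and are only emitted when an error is actually detected, so the happy path produces no log volume at all.

```python
class TraceBuffer:
    """Accumulate trace entries; emit them only on failure."""
    def __init__(self):
        self.entries = []

    def trace(self, msg: str) -> None:
        self.entries.append(msg)

    def flush(self) -> str:
        return "\n".join(self.entries)

def divide(a, b, tb: TraceBuffer):
    tb.trace(f"divide(a={a}, b={b})")
    return a / b

tb = TraceBuffer()
try:
    divide(10, 2, tb)   # succeeds: trace stays buffered, nothing emitted
    divide(1, 0, tb)    # fails: now the accumulated context is worth printing
except ZeroDivisionError:
    print("error; trace follows:")
    print(tb.flush())
```

In an "unconditionally emit trace logs" environment, `flush` could simply be called at the end of every run instead of only in the error path.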
It's absolutely not an anti-pattern if you have appropriate tools to handle different levels of logging, and especially not if you can filter debug output by area. You touch on this, but it's a bit strange to me that the default case is assumed to be "all logs all the time".
I usually roll my own wrapper around an existing logging package, but https://www.npmjs.com/package/debug is a good example of what life can be like if you're using JS. Want to debug your rate limiter? Write `DEBUG=app:middleware:rate-limiter npm start` and off you go.
> It's absolutely not an anti-pattern if you have appropriate tools to handle different levels of logging, and especially not if you can filter debug output by area.
It is an anti-pattern due to what was originally espoused:
I add logging code, that stays forever. For a major
interface, I'll usually start with INFO level debugging, to
document function entry/exit, with param values.
There is no value for logging "function entry/exit, with param values" when all collaborations succeed and the system operates as intended. Note that service request/response logging is its own concern and is out of scope for this discussion.
Also, you did not address the non-trivial cost implications of voluminous log output.
> You touch on this, but it's a bit strange to me that the default case is assumed to be "all logs all the time".
Regarding the above, production-ready logging libraries such as Logback[0], log4net[1], log4cpp[2], et al, allow for run-time configuration to determine what "areas" will have their entries emitted. So "all logs all the time" is a non sequitur in this context.
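Python's stdlib `logging` supports the same run-time "area" selection through its logger-name hierarchy (a sketch; the logger names are hypothetical): each area gets a named logger, and configuration alone decides which areas emit.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(name)s: %(message)s")

rate_limiter_log = logging.getLogger("app.middleware.rate_limiter")
auth_log = logging.getLogger("app.middleware.auth")

# Run-time configuration: only the rate limiter "area" emits DEBUG.
rate_limiter_log.setLevel(logging.DEBUG)
auth_log.setLevel(logging.WARNING)

rate_limiter_log.debug("token bucket refilled")   # emitted
auth_log.debug("session cache hit")               # suppressed
```

The same per-area levels could come from a config file or environment variable rather than code, which is the point: "all logs all the time" is never the only option.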
What is relevant is the technique identified of emitting execution context when it matters and not when it doesn't. As to your `npm` example, I believe this falls under the scenario I explicitly identified thusly:
... an "unconditionally emit trace logs" environment
(such as local unit/integration tests).
> There is no value for logging "function entry/exit, with param values" when all collaborations succeed and the system operates as intended.
Well, I agree completely, but those conditions are a tall order. The whole point of debugging (by whatever means you prefer) is for those situations in which things don't succeed or operate as intended. If I have a failure, and suspect a major subsystem, I sure do want to see all calls and param values leading up to a failure.
In addition to this point, you have constructed a strawman in which logging is on all the time. Have you ever looked at syslog? On my desktop Linux system, output there counts as voluminous. It isn't so much space, or so CPU-intensive that I would consider disabling syslog output (even if I could).
The large distributed system I worked on would produce a few GB per day, and the logs were rotated. A complete non-issue. And for the rare times that something did fail, we could turn up logging with precision and get useful information.
I understand that you explained some exceptions to the rule, but I disagree with two things: the assumption of incompetence on the part of geophile to not make logging conditional in some way, and adding the label of "anti-pattern" to something that's evidently got so much nuance to it.
> the non-trivial cost implications of voluminous log output
If log output is conditional at compile time there are no non-trivial cost implications, and even at runtime the costs are often trivial.
You are very attached to this "voluminous" point. What do you mean by it?
As I said, responding to another comment of yours, a distributed system I worked on produced a few GB a day. The logs were rotated daily. They were never transmitted anywhere, during normal operation. When things go wrong, sure, we look at them, and generate even more logging. But that was rare. I cannot stress enough how much of a non-issue log volume was in practice.
So I ask you to quantify: What counts (to you) as voluminous, as in daily log file sizes, and how many times they are sent over the network?
As I said, conditional. As in, you add logging to your code but you either remove it at compile time or you check your config at run time. By definition, work you don't do is not done.
Conditionals aren't free either, and conditionals on logging code - especially compile-time ones - are considered by some a bug-prone anti-pattern as well.
The code that computes data for and assembles your log message may end up executing logic that affects the system elsewhere. If you put that code under a conditional, your program will behave differently depending on the logging configuration; if you put it outside, you end up wasting a potentially substantial amount of work building log messages that never get used.
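A small Python sketch of both sides of that trade-off, using the stdlib `logging` module: deferred `%`-style formatting postpones the string build until the level check passes, and an explicit `isEnabledFor` guard keeps genuinely expensive data collection out of the disabled path entirely - at the price that any side effects inside the guard now depend on the logging configuration.

```python
import logging

log = logging.getLogger("example")
log.setLevel(logging.INFO)

def expensive_summary(data):
    # Stand-in for a costly aggregation done only for the log message.
    return sorted(data)

data = [3, 1, 2]

# Deferred formatting: the message string is never built at INFO level.
log.debug("state: %s", data)

# Explicit guard: the expensive work is skipped when DEBUG is disabled...
if log.isEnabledFor(logging.DEBUG):
    log.debug("summary: %s", expensive_summary(data))
# ...but if expensive_summary had side effects, behavior would now
# vary with the logging configuration, which is the hazard noted above.
```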
This is getting a bit far into the weeds, but I've found that debug output which is disabled by default in all environments is quite safe. I agree that it would be a problem to leave it turned on in development, testing, or staging environments.
The whole concept of an “anti-pattern” is a discussion ender. It’s basically a signal that one party isn’t willing to consider the specific advantages and disadvantages of a particular approach in a given context.
I know a lot of people do that in all kinds of software (especially enterprise); still, I can't help but notice this is getting close to Greenspunning[0] territory.
What you describe is leaving around hand-rolled instrumentation code that conditionally executes expensive reporting actions, which you can toggle on demand between executions. Thing is, this is already all done automatically for you[1] - all you need is the right build flag to prevent optimizing away information about function boundaries, and then you can easily add and remove such instrumentation code on the fly with a debugger.
I mean, tracing function entry and exit with params is pretty much the main task of a debugger. In some way, it's silly that we end up duplicating this by hand in our own projects. But it goes beyond that; a lot of logging and tracing I see is basically hand-rolling an ad hoc, informally-specified, bug-ridden, slow implementation of 5% of GDB.
Why not accept you need instrumentation in production too, and run everything in a lightweight, non-interactive debugging session? It's literally the same thing, just done better, and couple layers of abstraction below your own code, so it's more efficient too.
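As a rough illustration of letting the runtime do this instead of hand-written log lines, Python's `sys.settrace` hook can report every function entry/exit with parameter values (a toy sketch, not production instrumentation; the traced function is made up):

```python
import sys

def tracer(frame, event, arg):
    if event == "call":
        # Recover the positional parameters from the new frame.
        names = frame.f_code.co_varnames[:frame.f_code.co_argcount]
        args = {k: frame.f_locals[k] for k in names}
        print(f"enter {frame.f_code.co_name} {args}")
    elif event == "return":
        print(f"exit  {frame.f_code.co_name} -> {arg!r}")
    return tracer

def add(a, b):
    return a + b

sys.settrace(tracer)   # switch instrumentation on, no code changes needed
add(2, 3)              # prints "enter add {'a': 2, 'b': 3}" then "exit  add -> 5"
sys.settrace(None)     # and off again
```

Real debuggers and tracing tools do the same thing below the interpreter, with far less overhead, which is the point being made above.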
I agree that logging all functions is reinventing the wheel.
I think there's still value in adding toggleable debug output to major interfaces. It tells you exactly what and where the important events are happening, so that you don't need to work out where to stick your breakpoints.
I don't quite like littering the code with logs, but I understand there's a value to it.
The problem is that if you only log problems or "important" things, then you have a selection bias in the log and no reference for what the log looks like when the system operates normally.
This is useful when you encounter unknown problem and need to find unusual stuff in the logs. This unusual stuff is not always an error state, it might be some aggregate problem (something is called too many times, something is happening in problematic order, etc.)
I don't know what's "important" at the beginning. In my work, logging grows as I work on the system. More logging in more complex or fragile parts. Sometimes I remove logging where it provides no value.
Before SQL became an industry standard, many programs which required a persistent store used things like ISAM[0], VISAM (a variant of ISAM[0]), or proprietary B-Tree libraries.
None of these had "semantic expressivity" as their strength.
> If SQL looked like, say, C#'s LINQ method syntax, would it really be harder to use?
It used to be widely seen as such. See, for example, Stallman's latest post, where he mentions that. A coder was not the same as a programmer; coding was the lesser half of the job. Nowadays the term has lost its original meaning.
> Right now, I'm casually looking for other jobs, not committed to leaving, but seeing what my options are. It's a shame because I like my manager, and I like my colleagues, if only the CEO would just stay out of the day to day stuff.
If you find yourself having a viable employment alternative, perhaps having a conversation with your CEO addressing his micromanagement could be had? Of course, I recommend not sharing your employment options with coworkers (including the CEO) as this might complicate matters.
This is possibly the most dysfunctional, toxic, nihilistic, and Machiavellian advice regarding the topic of "How to Deal with a Bad Manager?", or any related topic therein, I have ever encountered.
I do not author this lightly and hope by doing so it provides an impetus for introspection.
Do with this intervention what you wish.
EDIT:
I do not know you nor your situation, work or otherwise.
What I do know is pain.
The post to which I responded has many pain indicators, or at least that is what I see when reading it. Perhaps I am just projecting though.
In the event I am not experiencing a form of projection, here is what I would like to share:
1) Holding onto pain only embeds it further
into one's life.
2) "Retribution" does not alleviate one's
own pain, but instead increases it.
3) Codependency can be toxic and addictive.
4) Most importantly: happiness is a choice.
The last one can be both the most obvious and most difficult to internalize.
Thank you for sharing your thoughts. I have the full understanding of your perception on this. And judging by the number of downvotes I got, your sentiment seems to be shared among others.
My advice is very different from “run”, assuming running is not an option. I want you to imagine a person who has never had too many opportunities in their life, and works in a downtrodden environment. And does not have the qualities to work somewhere better. Now this same person has wife and kids. The only thoughts that come from this is hopelessness.
In this hopelessness, minute incremental improvements is a huge world of difference. Communication, even snarkiness, will shake things. Praise will keep situations cool. Providing Safety will keep your opportunities open. I fundamentally believe that these three things at least will allow for improvements in the person.
As for your points, suffering through pain (coming from a superior or subordinate, or the environment) will make it worse, either hardening the heart or tearing it down. That is where mastering the psychology of the oppressor comes in. Knowing who you work with helps to cope with the pressure. Think of it as knowing the bully so you won’t get bullied. Retribution is not the intention, just communication. Communicate the full problem no matter how difficult it may be. And if the problem is with the person themselves, make it humorous. Codependency makes two people similar in nature. This would be the opposite of improvement.
On reflecting on my comment, I think it was a bit rushed and did not go fully in depth. It sounds very difficult to grasp. I apologize to everyone. If I could delete it I would.
> Thank you for sharing your thoughts. I have the full understanding of your perception on this. And judging by the number of downvotes I got, your sentiment seems to be shared among others.
IMHO, don't worry about the downvotes, but instead take joy in our opportunity to have this discussion. Again, just my $0.02 and all that. :-)
> My advice is very different from “run”, assuming running is not an option. I want you to imagine a person who has never had too many opportunities in their life, and works in a downtrodden environment. And does not have the qualities to work somewhere better. Now this same person has wife and kids. The only thoughts that come from this is hopelessness.
My mentor lived this experience for most of his career. He had a wife and child, so he chose income stability over an optimal work environment, for reasons I am sure you can relate to. It made me sad, the things he had to put up with, but I understand why he did.
Ultimately, he achieved his goal of providing for his family. However, the personal cost was high.
> On reflecting on my comment, I think it was a bit rushed and did not go fully in depth. It sounds very difficult to grasp. I apologize to everyone. If I could delete it I would.
Again, IMHO, sharing your thoughts/situation honestly is the important thing, even if others (myself included) do not agree with how it was worded. If we do not get conflict/pain off of our chest, out of our system, it can build up in our psyche much like a cut can become infected.
If anyone needs to apologize, it is me for having initially replied in such an aggressive manner. I hope you accept my apology for having done so.
EDIT:
Clarified the phrasing of "do not agree with same" to be "do not agree with how it was worded".
I've been in this situation as well, and there are no realistic options to improve it without leadership recognizing the manager's effect and redressing same.
> Micromanagement, lack of vision, poor communication, poor planning, zero support, full package.
This isn't going to get any better. More likely it will get worse over time as your new manager comes under more stress to deliver on promises made without the requisite planning and/or consultation with their team.
> About half the team share similar view. The other half seems like just playing along.
Experience suggests the latter group shares the same view as the former, but has other priorities (family, stock options, retirement, etc.) that outweigh voicing it. This is not a judgement nor a bad thing. It just is.
> I still care about the mission and about what I do. Though not as much as before this all happened.
This is an inevitable transition resulting from this scenario. A cheeky phrase for this is "beating the care out of you."
> What would you do in my shoes to make the best of the situation?
Make as few waves as possible; do what you are assigned to do ethically.
Take your time to identify an ideal opportunity in another company.
Say nothing of the job search.
Only move on to another gig if you have an employment agreement in place.
> Seniors' attitudes on HN are often quick to dismiss AI assisted coding as something that can't replace the hard-earned experience and skill they've built up during their careers.
One definition of experience[0] is:
direct observation of or participation in events as a basis of knowledge
Since I assume by "AI assisted coding" you are referring to LLM-based offerings, then yes, "hard-earned experience and skill" cannot be replaced with a statistical text generator.
One might as well assert an MS-Word document template can produce a novel Shakespearean play or that a spreadsheet is an IRS auditor.
> Or maybe the whole senior/junior thing is a red herring and pure coding and tech skills are being deflated all across the board. Perhaps what is needed now is an entirely new skill set that we're only just starting to grasp.
For a repudiation of this hypothesis, see this post[1] also currently on HN.