I went through the MOOC material and tried it out for a few small things. It inherited a lot of the unique Smalltalk features which make it sort of alienating to a modern programmer. For instance, all your code resides in an image file, and if you want a copy of your code, the environment does some extra epicycles to copy it outside. The choice to make everything a message, including basic flow control, takes some getting used to. As you just sort of hack your image to do what you want, it just sort of turns into a ball of mud. The paradigm they're going for is TDD for everything. Personally, I feel this is a big step backwards from most mainstream scripting languages adding on type annotations. It's not easy to use a simple text editor. You pretty much have to use their integrated environment.
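To give a concrete taste of the "everything is a message" point: conditionals and loops are ordinary messages sent to objects, not keywords. A minimal sketch you could paste into a Playground:

    | x total |
    x := 3.
    total := 0.
    x > 0
        ifTrue: [ total := total + x ]
        ifFalse: [ total := total - x ].
    1 to: 5 do: [ :i | total := total + i ].
    [ total > 10 ] whileTrue: [ total := total - 1 ].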
Then, there were a few problems that were specific to Pharo. Pharo went through a couple different package systems and the different package systems don't necessarily have the same packages. Pharo has had major breaking changes in their GUI toolkit, so if you found a package that did exactly what you wanted and were able to install it, it just wouldn't work.
This is kinda the point of smalltalk. It's a radically different programming model _and_ paradigm than most C-derived languages. If you're looking for a language that feels comfortable to developers with a background in [insert widely-deployed language here], there are better options for you.
Smalltalk has been around for over 40 years, which makes it a contemporary of C. Just like FORTRAN or COBOL, there's a corpus of deployed code, and institutions that are invested in a maintained runtime, but that doesn't mean that you would necessarily want to use it for a new project today.
A lot of the great things about smalltalk, such as block syntax for anonymous functions, have been copied into many modern programming languages, and we probably wouldn't have them without smalltalk taking its unconventional approach.
> The paradigm they're going for is TDD for everything. Personally, I feel this is a big step backwards from most mainstream scripting languages adding on type annotations.
So, the push to add types to JS, Python, Ruby and other dynamic languages is largely from developers accustomed to Java, C-Sharp, and other enterprisy languages who would probably rather not work in a dynamic language at all. Put another way, it's a concession of these languages to try and be everything for everyone. But statically typed compiled languages do not provide an inherently better programming paradigm than dynamic programming. Smalltalk commits further and deeper to a live, dynamic programming experience. It's different, and I don't feel like saying that it fails to conform to expectations brought in from other, very different programming paradigms is a meaningful criticism of the language.
> So, the push to add types to JS, Python, Ruby and other dynamic languages is largely from developers accustomed to Java, C-Sharp, and other enterprisy languages ...
Nope, this is not true. I am personally pushing to add types to our Python codebase, and I have no love for "Java, C-Sharp and other enterprisy languages".
It's just that as the codebase grows, and especially as the number of contributors grows, people start to make more mistakes. A rarely used code path, like an error handler, might fail in production because of a wrong type or a missing argument. We can require unit tests with 100% coverage, but this is very hard -- and a typing linter finds you many more bugs per unit of effort.
That does not mean that we should always specify every type in a program explicitly, like Java does. Unspecified types are great for interactive exploration, or a quick hack. But as you move to production, don't underestimate typecheckers -- they can help a lot.
This is pretty much my feeling. I'm not adding types because I have some kind of brain rot that makes me need AbstractFactoryBeanContainerAnnotationFactory. I like adding type annotations because it means you can do static analysis on your code base. Without them, you need exhaustive coverage tests to demonstrate what exactly can be returned.
> That does not mean that we should always specify every type in a program explicitly, like Java does. Unspecified types are great for interactive exploration, or a quick hack.
It's also helpful to consider whether they constitute unnecessary requirements[1]. Most mainstream JS code is rife with problems like this—including rampant mis-/over-use of triple equals. (I call this "going out of your way to do the wrong thing".)
In what situation do you want the whole spectrum of `==` behavior in JS, other than possibly the `undefined == null` case? I’d argue exercising any of the other type conversion cases like `1 == true` or implicit `valueOf/toString` makes the code much harder to understand.
To ask the question is to fundamentally misunderstand the context. Please do check out the link. It's not a matter of wanting "the whole spectrum" of double equals (and then justifying that). You need to justify the requirements you're imposing. The article I linked gives an excellent example of why unnecessary requirements should be avoided.
Aside from that, overuse of triple equals is almost always a code smell that indicates the contributor is coming from a place of following cargo cult advice instead of solid understanding. (And in the case of the cargo cult advice about triple equals, it's not even very logical advice. I.e. it's not just the people following the advice who aren't thinking it through—the people dispensing it tend to be engaged in shallow thinking, too.) For example, anyone who asks you in a code review to replace `typeof(x) == y` with `typeof(x) === y` has no solid reason whatsoever for that change (and will never have any; there is no argument that will hold up under scrutiny, since `typeof` always evaluates to a string, so comparing it against a string literal behaves identically with `==` and `===`).
> But statically typed compiled languages do not provide an inherently better programming paradigm than dynamic programming.
For any large, long-lived project with multiple contributors, static type analysis definitely adds value by eliminating an entire class of errors at compile time.
> the push to add types to JS, Python, Ruby and other dynamic languages is largely from developers accustomed to Java, C-Sharp, and other enterprisy languages who would probably rather not work in a dynamic language at all
I recall Guido van Rossum stating once that he got convinced of the necessity for type annotations by JetBrains explaining to him how hard it was to provide good code completion. Not sure it's the full answer, but back then I found it interesting as an example of how lobbying can work.
(I feel rather indifferent on the type annotations for Python actually, I can see their usefulness, but also the shortcomings of retroactively introducing such a system into a dynamically typed language).
Typically in Ruby (which is heavily Smalltalk-influenced) we do ad-hoc type annotations anyway and call it documentation.
So you've got the camp that favors type annotations for various reasons and the camp that is opposed. There is also a third camp: the DBC (design by contract) camp. Their argument is that type annotations don't go far enough and that what you really need is to enforce preconditions and postconditions. While I see their point, I think DBC lacks a "killer app" -- which, as you pointed out, for type annotations is static analysis (which leads to tools like code completion, refactoring browsers, performance improvements, and more).
> I recall Guido van Rossum stating once that he got convinced of the necessity for type annotations by JetBrains explaining to him how hard it was to provide good code completion.
What a weird reason to decide type annotations are useful, but I suppose I am not surprised. And is that really even true? Code completion works fine with Elixir and ElixirLS in VS Code and is seemingly independent of whether typespecs are present or not.
Type annotations are there to help the user in figuring out what the parameters have to behave like, and, perhaps more importantly, for linters to help finding issues caused by mismatching the types.
Invoking the .detonate() method on a pyrotechnic bolt is, usually, far more benign than invoking it on an instance of Warhead. You really want to make sure you don't pass a Warhead to a function that was meant to get only bolts. A linter will warn you about such possible mishaps.
I understand how type annotations are useful for users and also for static analysis tools (called Dialyzer in Elixir/Erlang), which is exactly why I didn't see why code completion would be one's primary reason for wanting them. However, in Python's case, with heavy use of OOP and dot notation, I suppose I could see it being beneficial if the tools pick up type annotations for code completion (still not the primary reason though), as opposed to something like Elixir. In Elixir, types and modules are tightly coupled, which means users are implicitly aware of types and code completion generally works without type annotations (which can be inferred by Dialyzer anyway).
So your comment actually backs mine up. I don't know why code completion would be the tipping point for type annotations when their usefulness to users and static analysis tools is already enough. Their value as documentation for users is in fact reason enough to have them.
It could have been that code completion was the feature Guido saw would seal the deal with users. Good code completion is a great tool for exploratory coding. I frequently found out about new APIs through the IDE before reading the documentation. This has been extremely valuable since Visual Basic 3, IIRC.
I like it - sometimes I want something to really take an integer (or an explosive bolt) and not any number (or a warhead). If I can say that the argument is supposed to be something, then it's harder for the IDE user to misuse it with unpredictable results.
Or, if my library or function can do something I never expected it could, I welcome the patch to make it obvious.
> a copy of your code the environment does some extra epicycles to copy it outside
Iceberg https://github.com/pharo-vcs/iceberg is the Git/etc. integration built into Pharo and works extremely well. You don't need to "file out" code if that's what you meant.
But you wouldn't want to store the .sources or even .changes files in git, would you? The git-way is to have many small(ish) files so that you can merge them easily into another development branch. Having just 2 files in git doesn't seem to make sense.
I don't know how Pharo does it, but I could imagine every class having its own source file, committed to git?
> Don't think of it as trading source files.
> Think of it as version controlling a package of class definitions.
It is very useful to know whether I should load version 1 or version 2, or 3 of a (whole) package if somebody tells me they have a new version of their great package.
But the main thing about git (and distributed version control) is not "trading source files". It is making changes by a distributed team to the same code-base and then being able to merge the changes made by multiple developers, with tools that help resolve conflicts between the different versions.
You can have, and often do have, multiple git-repositories. So if multiple developers were working on a Smalltalk package, it might make sense to create a git-repo for just that package. But if the only file is the package-file, then "merging" doesn't make much sense, because you can't combine files changed by one author with files changed by other authors.
So while I want to be able to "control package versions", I also want to be able to "merge branches". And to support that, it seems better to have many small files instead of a single big package file.
I agree that Smalltalk should be able to do better than a typical git-application because it knows about the semantics of classes and methods. And I think Smalltalk "change-sets" are a great concept. So I'm just interested in how Pharo tries to solve the tension between big package files versus multiple developers working on small code-units at the same time.
Right. But Change-sets and the Change-set browser are still a great concept. I wish other programming environments would support them. You see exactly the set of objects you have changed or created, and can save them to a file and share that with others.
One reason they work so well is that Smalltalk treats methods as objects, not just as functions defined in some file. So a change-set knows it contains specific versions of specific methods. That is not possible in a typical git-environment where the most we know about a change is that it consists of some specific lines changed in a specific file.
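To make that concrete, here is a small sketch (assuming a Pharo image) of asking a method about itself:

    | method |
    method := OrderedCollection >> #add:.   "the CompiledMethod object"
    method selector.                         "=> #add:"
    method methodClass.                      "=> OrderedCollection"
    method sourceCode.                       "=> its current source text"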
> One reason they work so well is that Smalltalk treats methods as objects, not just as functions defined in some file. So a change-set knows it contains specific versions of specific methods. That is not possible in a typical git-environment where the most we know about a change is that it consists of some specific lines changed in a specific file.
People just coming to this might not realize that the Opensmalltalk-derived environments (Squeak, Pharo, etc) already had their own VCS system called Monticello, which worked pretty well once you understood the per-method diffs and how it was suited specifically to Smalltalk. The move to git -- Iceberg in Pharo and less so Squot for Squeak -- has left some of this behind in order to appeal to a broader base of orthodox programmers who might be otherwise scared off. It's kind of a shame.
Interesting perspective. It seems I haven't fully appreciated ChangeSets.
I became accustomed to saving method changes and seeing timestamped editions of that method; adding a version tag to what seemed like useful method editions; resolving any conflicts, then releasing a versioned method.
I kind-of liked the early visibility that gave of what other people were doing.
> You have class definitions, and these are only version controlled in text files as a concession to the extreme popularity of GitHub.
What prevents the pharo guys from simply providing their own merge tool and letting git call it?
The only remaining issue then is showing a file system tree and diffs out of the image, e.g. a simple export that doesn't necessarily even have to be importable.
You are missing some background. First, Pharo uses libgit2 and provides its own merging mechanism to this library. It understands the code much better than plain Git, so code merging usually has fewer conflicts.
Second, for Pharo, the code (classes, methods...) are living objects. These objects are the working copy Pharo deals with. Not with the files. So it mostly works only with this object working copy + .git database. Another working copy of files (which other developers would use for editing, files outside of the .git directory) is just an additional source of problems and conflicts.
Pharo has a nice Git integration, and they spent a lot of time on making it work really well. So it has functions like browsing the history of one particular method etc. You are discussing already solved problems.
> it has functions like browsing the history of one particular method etc.
That is great functionality I think. Much better than basic git where changes are just changes to a set of lines. The unit of software should be classes and methods, not files.
The reason git can't do this is that it is language neutral.
However, what I find interesting is that there is a parallel between git commits and Smalltalk snapshots. It would be great if we could "merge" Smalltalk images by checking out specific git commits and merging them together. The result would then be all the source-code in your current Smalltalk image.
It is difficult if not impossible to merge two Smalltalk images. Yet it would be useful to know what are the conflicts if any between two images. Maybe Pharo already provides something in that direction?
> … if we could "merge" Smalltalk images … merge two Smalltalk images … what are the conflicts …
Merge changes, not images.
Even back in 1983, "Smalltalk-80: The Interactive Programming Environment" described this:
"check conflicts
A prompter appears in which you type the name of a changes file. When you choose the yellow button command accept, an analysis is done of all the items in the change management browser menu to see if any of them refer to the same message selector, but specify different definitions."
As a very long time occasional Smalltalk user (on my 1108 Lisp Machine, Commercial on early Mac, later Squeak, then Pharo), I agree with you, except that Pharo has nice git integration that I can recommend spending the time to set up.
Also, I used to create Squeak headless standalone applications - plenty of tutorial material for this.
Lisp used to get a similar bad rap, but SBCL has good app packaging available and the commercial LispWorks makes it easy to build small standalone apps.
> As you just sort of hack your image to do what you want, it just sort of turns into a ball of mud.
No, it doesn't just turn into a ball of mud all-by-itself.
As you said, we can export our app code and bake a new working image (vendor image + our app code — kind-of like a container, kind-of like a reproducible development process.)
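As a sketch of what that baking step usually looks like (the baseline name and repository URL below are made up), loading the app code into a fresh vendor image is typically a one-liner run at build time:

    "Evaluate in a fresh stock image to reproduce a working image."
    Metacello new
        baseline: 'MyApp';
        repository: 'github://example/myapp:main/src';
        load.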
Pharo and Smalltalk in general always remind me of that famous William Gibson quote: "The future is already here—It’s just not very evenly distributed".
If any Pharo folks are reading this, here's a small website feature request: Let me click on the images of this page to get full-resolution, un-cropped versions.
Pharo follows from Smalltalk, particularly through Squeak. It started off as a fork of Squeak and retains Smalltalk's syntax, but has gone its own way in terms of the capabilities it provides and rewriting portions.
I really appreciate that a blog post about a language update starts with a simple sentence to describe what the language actually is.
> Pharo is a pure object-oriented programming language and a powerful environment, focused on simplicity and immediate feedback.
Many times I land on announcements about Foo v3.0 that is super awesome and describes the diff from v2.5, with no mention whatsoever about what Foo actually is.
Congratulations to all the Pharo folks. It’s good to see a number of names in the contributors list that I still recognize. I'm really impressed that Pharo has maintained ground this long.
This coming June, it will be 10 years since I jumped out of the Smalltalk balloon, and dedicated myself to embracing polyglotism, right tool for right job, pragmatism, etc.
I miss the community. I miss many of the elegant aspects of Smalltalk. Maybe, since it’s release 10, and I’ve journeyed elsewhere for 10 years, I should give Pharo a spin. Any pointers for ye olde VisualWorks guru to get up to speed again?
Who's using Pharo in production? Every time Smalltalk comes up, it's almost like it's this Loch Ness monster that everyone claims to be enamored with but doesn't actually exist. I want to love Pharo so much, I just can't think of a single thing it would be useful for.
Why does knowing who else is using something have any impact on your ability to love it? Sounds like it's just a motivation problem perhaps?
For example, I make use of Plan 9 daily, and I don't really give a whip if anyone else thinks it's worth anything at all.
I also use Emacs lisp in pretty strange ways (automating workflows by cross-querying REST services in ways I used to do by hand). I've even demonstrated it for coworkers, and I think some of them may be using the same techniques, but I'm not sure, and I don't care. If they have questions about it, I'll answer them. If not, they can say "that guy's weird", and I'm totally fine with that.
Outnerd the nerds I say! Let your freak flag fly!
In all seriousness I encourage people to write some code. Play around! Share your experiences. It's good for you and everyone else to foster new ideas and be innovative. It's especially good if the technology in question is surrounded by a welcoming community. It provides an additional sense of belonging to something, and who knows you might just actually enjoy yourself!
You've got the first spark of curiosity, now you've just got to stoke the flames, or not!
> Why does knowing who else is using something have any impact on your ability to love it? Sounds like it's just a motivation problem perhaps?
Not making a value judgement. I agree that it doesn't need any more reason to exist than for its own sake. But I'm just genuinely curious what advantages Pharo has that make it useful in a professional setting.
> what advantages Pharo has that make it useful in a professional setting.
Pretty much, the ultimate quick-and-dirty prototyping engine, especially for GUI projects. In the sense that you're prototyping largely from scratch, not cobbling myriads of huge libs. So, ball of mud. Not brick wall.
Funny thing. Back-in-the-day Smalltalks were attractive because the source code was included to be studied and re-used (When that wasn't true of many other development tools).
Yes, in the Pharo world there are coredumps, logs, and serialization of the context of the exception (so you can open a debugger later from another host, see the message walkback, and inspect the values of all objects and their instvars at any step of that walkback). Also, I've made a RESTful REPL [1] to interact live with headless images, and I'm (cough) secretly working on a websocket-based IDE.
For the rest I can tell you that:
- The CI doesn't have anything special in it; it just builds and delivers, after many categories of tests, a docker image with the app ready for production.
- Iceberg for a shared repo. Devs usually use one fresh image per new branch to work on. Flow is reasonably approximated to git-flow.
- The app is generally architected to be as stateless as it can be, so it can enjoy essentially unbounded horizontal scaling capacity.
Thanks for the reference, I didn't know about that one!
What I'm working on is based on Pharo tho. It's an alternative IDE and accidentally a way to create apps that have multi-platform native look and feel.
Pharo runs in the backend for resolving roaming for SS7, GTP and DIAMETER.
It orchestrates a lot in there in order to provide Network Virtualization [1]
Hacker News has become mainstream, which means the leetcode grind with Java, Python or JavaScript. The hackers are still here, but they are just a minority.
SmallTalk as a language is IMVHO terrible BUT its strength came from its concept, a user-programmable environment, and that matters so much.
SmallTalk was the language of the first commercial desktop environments: modern desktops with keyboard, mouse, a form factor similar to today's desktops, networking, etc., and those historical systems are still far more advanced than today's ones.
Personally I prefer Lisp as a base language, but in any case the concept is far more important than the rest. It is a thing humanity lost years ago and needs to recover ASAP.
I find preferences-wars a source of evil. Lisp is fine, not the subject here, tho.
Back to the subject: Smalltalk's syntax allows the most elegant expression of computer code similar to natural written language I've seen, and it does so at the lowest cognitive load to learn and read that the computing world has provided so far.
I'm with you about raising the importance of concepts, and that humanity needs to rescue its capacity for making that relevant again. I'm afraid Post-modernism is liquefying intelligence and making everything related to intelligence harder to flourish.
PS:
We use a lot of camel case but please remember it's Smalltalk (instead of SmallTalk).
It has nothing to do with philosophical concepts and everything to do with business culture.
There are tools that facilitate expression, innovation and individual contribution. And there are tools that make it easier to scale and for programmers to be interchangeable.
These two are very hard to unify. There’s cross pollination but that’s about it.
Whether you’re in a top down hierarchical business model or in one that’s more open and people centric has a significant impact on which type of tools you’re using.
Smalltalk, Lisp and their descendants belong to the former category. They are immersive, expressive and incredibly dynamic and their communities are often pioneers. But this type of freedom comes at a cost that some do not want to pay.
> But this type of freedom comes at a cost that some do not want to pay.
Who are those some? Managers or techies? Because yes, businesses choose to do their best to avoid Smalltalk, Lisp, etc. for business reasons, but those reasons are not "their cost" but the fact that with them everything is open and integrated; there are not countless individual "products" to be sold, well welded shut to guarantee that no one can tamper with them, etc.
It's a bit like the modern "open source enterprise" monsters: they are not monsters because being so makes it easy to change the people involved in a project or to find people skilled in that field, but because being monsters makes it cheap to pay for enterprise support instead of dealing with issues internally, to rewrite from scratch instead of forking, etc.
While these days Smalltalk and Lisp are just programming environments on a host OS, in the past they were the OS, and so the OS was a single unified environment, not a cargo ship with containers on top where everything is rigidly separated. That's the business reason behind the industry's choice, IMO...
> We use a lot of camel case but please remember it's Smalltalk (instead of SmallTalk).
Thanks, correction registered :-)
I agree about preferences-wars; I generally state my preference not to start a flame but to give something for a positive discussion, which means "how many in a community prefer $this over $that and why". I'm less sure about the "similarity" with a natural language: in nature we use words as tags, so "objects" are natural, and message passing is "natural" in the sense of collaboration between humans, but not really in the OOP sense. Also, Smalltalk syntax to me does not look much like natural language.
The most similar language I've found so far is Python; imperative programming is not so effective but is easier than functional for newcomers, and its syntax is very easy to read. IMVHO that's the reason for its popularity, especially compared with languages very similar in their targets, like Perl or Ruby.
Lisp is easy once you have learned a bit, and probably that's one part of why it's not very popular. The rest, in common with Smalltalk, is that both are designed for user programming, while "modern industry" wants to lock users in, depriving them of any possible freedom and free usage of a desktop. In the past that was the IBM doctrine, thereafter the GAFAM doctrine: they were all born out of Xerox tech, a small part of it, wrapped in ways that put a product or a service at the center instead of the human. It's not only about liquefying intelligence but a means to make certain things hard to do.
A stupid example: we have Wikidata, and SPARQL is not that great or digestible a language, but it's there. Why the hell isn't it normal, for a simple middle-school research project, to query Wikidata from "a buffer" (in the Emacs sense) and see the results in various forms we can manipulate to plot graphs etc., at a level easy enough for a middle-school student? Because IMVHO making that easy means users have power in their own hands; they can move not only along pre-defined and controllable paths but independently.
Why aggregators instead of personal aggregators with RSS plus Xerox-style scoring? Again, because aggregators can satisfy users' demands while matching them to their host's desires, for instance silencing or discrediting some news and amplifying other news (see the recent polemics about the Meta vs TikTok PR campaign), while a personal one can't be tweaked remotely, so users might find their own path with their own ideas and tools.
Why an "internet banking website" instead of a common standard API (like EU OpenBank, but open for all) with a desktop client? With that, users could have their transactions digitally signed on their own desktop and have multiple banks and banking services in the same place with the same UI, etc., without the surveillance of crappy banking websites, crappy authentication, and nothing in our own hands.
All these things have a common ground: modern/classic software development is compartmentalized, with every human layer religiously separated from the others, while ancient programming languages were designed for a coherent, unified environment without barriers.
Allow me to say that Smalltalk syntax can be really, really similar to plain English sentences if you intentionally refactor it to optimize for an elegant API. But (hairy topic) not every Smalltalker will have a talent for writing, or the taste, or care at all about that, or at least enough to develop in that direction (which incidentally produces software that is friendlier to maintain and extend).
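Even stock library call sites get close to that; a small sketch (people, orders and their accessors are hypothetical domain objects):

    adults := people select: [ :each | each age >= 18 ].
    names := adults collect: [ :each | each fullName ].
    overdue := orders detect: [ :each | each isOverdue ] ifNone: [ nil ].
    #(3 1 4 1 5) copyWithout: 1.    "=> #(3 4 5)"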
It doesn't have to try to be natural language (AI/ML is close to mimicking that properly [1]); it just needs to feel comfortable enough for the programmer to tell computers how to behave, with less cognitive load.
The other questions you raise are, I think, very real and much more related to how power works among homo sapiens sapiens than to computing languages. The software becomes an optimization of the power relationships created among individuals, groups and institutions/legalized-mafia-groups.
Hum, I can't really visualize how expert systems can "free us" from the need for programming, nor in general how a UI can "free us" from the need for "small talks between human and computer"...
Oh sure, it's appealing to speak to my desktop with something like "hey desktop, display a 3D animation of the world by population density, annual climate, natural resources, kind of society, etc., to help me see a good place to emigrate to according to my preferred criteria you already know", but we are decades away from such UIs, and their real effectiveness is still largely unknown even in philosophical terms, because such UIs will definitely be non-deterministic, so it's a bit hard to trust them in many cases, mere verification is next to impossible, etc.
It's not needed in many domains, of course; for instance, if I want a rough classification of scanned documents and most of them are classified well, humans benefit from the raw ML classification. It's not equally nice for a car's ADAS system that crushes me against a wall because of an interpretation error.
In Smalltalk terms: yes, some listings can look like plain English, but many others do not, and that's not a real issue per se. The real issue is that the original Smalltalk was imagined as a human-computer UI, while modern ones are kind of lost programming environments. For instance, how can I compose a document with relevant graphs from various data sources in Pharo? Because that should be the target of a human-computer UI... For such purposes Smalltalk is far from easy to read and use...
Also, yes, I mean: most computer work is about information manipulation: retrieving, filtering and composing information. Such information might be text, images, videos, audio, "databases", etc., and to manipulate, filter and compose it we need something that lets us do so as we wish.
Still today there is no such "comprehensive tool" for anything like that; our most advanced modern tools are notebook UIs like Jupyter or experiments like Wolfram Alpha. Emacs with org-mode is far more flexible than them and actively developed, but still limited in graphics and usage terms. Xerox workstations are of course from a different era, with much less computing power, but they still had something, and the dream was to evolve in that direction.
Modern Smalltalks seem to have forgotten this part, being much more focused on "the programming side"; modern systems can't even reach something like the old ones, since they are "individual products" with very limited IPC...
Funny you mention that, as lately I was fantasizing about a good, friendly and powerful terminal-based[1] IDE for Smalltalk that you could use to connect to running images.
Curious: why do you look for a terminal-based environment? I understand that for CLIs we have efficient FLOSS network connectivity (ssh), while for GUIs, even if some good-enough tools exist, some of them FLOSS (Apache Guacamole), they are pretty limited and complex to support everywhere. But, says someone who was "born" on Unix, CLIs are nice and useful for many things, while TUIs are not as good for reading and writing text and graphics (framebuffer)... They tend IMO to be just good enough in certain cases, but not as a daily driver.
Eh, it's rare that you find a language simple enough that its syntax fits on an index card, yet conceptually rich enough that you can build complex systems with it. This is a feat in and of itself that warrants admiration and examination.
r5rs was almost this, but it lacked any real facility for creating ADTs, like a class system, instead just giving you the rudiments (closures, functions), and expecting you to do the rest. Racket fixes this, but it's also much more complicated than both r5rs and Smalltalk.
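For a sense of how small that index card really is, nearly the whole of Smalltalk's syntax fits in a few lines (a Playground sketch):

    "Literals, unary / binary / keyword messages, cascades, blocks, assignment."
    | point label |
    point := 3 @ 4.                              "binary message #@"
    label := point x printString , '/' , point y printString.
    Transcript show: label; cr.                  "a cascade: two messages to one receiver"
    #(1 2 3) inject: 0 into: [ :sum :each | sum + each ]   "keyword message with a block"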
I agree on the Lisp part, textual syntax is nice for human understanding but will one day be no longer necessary. If only they had made a cheap Lisp machine!
> textual syntax is nice for human understanding but will one day be no longer necessary
Can you elaborate on your vision? What future do you imagine in human-computer UI terms? Visual programming has so far proved to be a failure, and modern visual programming is even worse than past attempts (see how crappy visual environments are, from RPA to NodeRed etc.)... Vocal interfaces are also crappy... Even if we look at sci-fi movies, the computer interfaces they show are ridiculously inefficient... Just see Star Trek as an example: vocal commands in emergency situations that demand multiple seconds while a direct hit on the keys takes just a few ms, the Borg with neural connections plus visual and touch UIs in their ships, etc. Oh, that's sci-fi of course, but it means that even artists have so far failed to imagine different means.
> If only they had made a cheap Lisp machine!
At that time it was not possible in scale and industrial terms, similarly to the far older Xerox workstations; it's possible now, but too many do not even know that original desktop model, and MANY (very skilled PhDs included) even have trouble imagining it... Just by using Emacs/EXWM in a presentation I generally create an extraordinary WOW effect on techies who fail to understand what they see live on the screen/projector and how it is possible, and there is nothing extraordinary there, just normal EXWM/org-mode usage. That's the real issue, in common with Smalltalk: the people who could understand do not know about it; the people who could push the classic model again do not want it at all, since it hurts their skyrocketing businesses (and curiously the old businesses were in relative terms more profitable for everyone, only they did not in any way guarantee the possibility of dominating the market, since, being knowledge-based, anyone who happened to have developed something good could succeed); and the large mass of the rest of humanity does not even know what happened yesterday, so...
Much of the power of Lisp lies in its homoiconicity, but it also means the syntax is "just a bunch of parens" - a ball of mud is hard to parse. Structured editing negates this by changing the view layer from text to something richer and more specified to the form, but still in the same basic shape as the code, which is harder to do with C-like languages. Most of what's needed is a fully integrated development environment such that the language and editor are created in unison, like many Lisp efforts of old.
Hum, personally, as an Emacs-er and not a developer, Lisp for me is just a nice user-programming tool: a way that allows me to insert and run code for many things I need inside notes, running it on a click, on a keybinding, on file save/open, etc. That's its power.
With C-like languages I can't do the same: they need compilation, they can't live in a live REPL, even a hello world is many SLoC, etc. Unix (CLI) offers shell scripting for small automation, but we lost that with GUIs, while the classical systems (Smalltalk- and Lisp-based) can do it perfectly and effectively.
If those languages for crafting complete OSes are complex and slow, I do not care much, because the outcome of something like http://augmentingcognition.com/assets/Kay1977.pdf is so powerful that it's absolutely worth the effort, having seen actual systems that have essentially failed to innovate since at least the '80s...
Rather misleading - and seemingly insulting to the people who created and use Smalltalk and Squeak - that it doesn't say "Smalltalk" (not to mention Squeak) anywhere on the front page.
They should really give credit where credit is due; growing the Smalltalk community seems like a better idea rather than pretending it's something else and implying that Smalltalk is somehow something bad that they need to hide.
I've been using Pharo professionally for about 1.5 months now, and I'm beginning to get the hang of the language and the culture surrounding it.
I'll share a couple of thoughts.
__General Development/Hacking__
0. From Pharo 9 and onwards, it runs smooth on an M1 Mac.
1. If you're interested in developing in Pharo professionally, we're hiring [1]!
1b. Another way to learn about Pharo is to come to Smalltalk/Pharo conferences. I was at the Pharo Days in Lille recently and learned a lot in those 3 days! Other than the Pharo Days there is ESUG.
1c. Check out the Discord channel. People have been really helpful [0].
2. A big part of the documentation can be found inside the image. For example, press SHIFT + enter and type "tutorial" or "exercise" and a few will come up. Moreover, when you click on any class, there's a "class comment" tab. Some classes actually have comments, and when they do, they're written in a quite understandable fashion.
3. You can make the following clicks and they do something different:
- Click: shows a context menu to showcase a particular window / tool
- Right click: shows "World Contents", aka objects about your GUI (I haven't used this one yet)
- Shift + Option + click: this shows a "halo" around a GUI
- Shift + Control + Click: the coolest meta click there is, it shows all objects that are directly related to the pixel you clicked on. If you clicked on a table cell item in a GUI window, you can figure out which object it is via this click. I've used this trick to figure out where to go in order to alter my IDE to my own taste.
4. Pharo is one of the few languages I know where you can create like a game or application for which it's possible to immediately see the source code and interact with it. The use-case this allows for is highly hackable open-source applications. For example, here's a game that allows you to design chips [2]. It's easy to see how you could alter levels yourself quite easily, just start editing the code! IMO, that's next level open-source.
5. If you want to, it's relatively easy-ish to inspect the AST and see how it's mapped to the VM. Be careful of setting breakpoints here, you'll crash your image :P
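A hedged one-liner of what that looks like, assuming the #ast and #symbolicBytecodes accessors on CompiledMethod that recent Pharo versions provide:

    (OrderedCollection >> #add:) ast inspect.                "the parse tree"
    (OrderedCollection >> #add:) symbolicBytecodes inspect.  "the compiled bytecode"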
6. A Smalltalker told me that Pharo's VM enforces everything to be an object but technically not everything is an object, as far as VM implementation goes. If you're not hacking on the VM, then you can assume everything is an object.
7. Overwriting the #doesNotUnderstand: message can lead to all kinds of fun! I wrote my custom If/Elif/Else DSL. Smalltalkers will hate me but I found it awesome because it showcases how one could hack a DSL quickly. You can see about it here [3]. Here's some example code:
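(The snippet below is only a minimal sketch of the pattern rather than the exact code from [3]; the class name FlexibleCall and its details are illustrative.)

    Object subclass: #FlexibleCall
        instanceVariableNames: 'arguments'
        classVariableNames: ''
        package: 'DSL-Sketch'

    FlexibleCall >> doesNotUnderstand: aMessage
        "Accept any keyword message made only of _: parts, e.g. _: 1 _: 2 _: 3,
         and keep its arguments instead of signaling an error."
        (aMessage selector keywords allSatisfy: [ :each | each = '_:' ])
            ifFalse: [ ^ super doesNotUnderstand: aMessage ].
        arguments := aMessage arguments.
        ^ self

    "FlexibleCall new _: 1 _: 2 _: 3   - the arguments #(1 2 3) are now available"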
As you can see, I've hacked the _: to be a separator of some sorts, but what it actually is, is an argument of a message. You can do all kinds of fun stuff with this. See [8].
8. When you overwrite #doesNotUnderstand: you can inspect the message and its arguments. So if you send Object1 a: arg1 veryImportant: arg2 message: arg3, you can inspect those arguments. In the case above, this means you can also inspect _: arg1 _: arg2 or _: arg1 _: arg2 _: arg3 ... _: argN. In other words, you can deal with variable arguments and it doesn't matter what they're called. Because of this, it's easy to create a simple DSL; if you need another separator, then simply add one. You have a lot of characters at your disposal that are quite unique [4]. I figured that out by using point (2) and just looking around in the environment.
__Web Development__
9. Seaside is capable of live and dynamic updating. MOOCs won't tell you this because it requires using Seaside quite differently. In short, the pattern I see used at my work is server-side rendered HTML with designated blocks as callbacks. So when you send your server-side rendered HTML, those callback blocks transform themselves into jQuery GET/POST requests. Pharo writes the jQuery for you. We also use React, but I haven't gotten around to how it's used; I'm fairly sure we don't use anything like Redux.
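For readers who haven't seen Seaside callbacks, here's a minimal classic sketch (CounterComponent is hypothetical; the jQuery-backed variant described above builds on the same callback idea):

    WAComponent subclass: #CounterComponent
        instanceVariableNames: 'count'
        classVariableNames: ''
        package: 'MyApp-Web'

    CounterComponent >> initialize
        super initialize.
        count := 0

    CounterComponent >> renderContentOn: html
        html heading: 'Count: ', count printString.
        html anchor
            callback: [ count := count + 1 ];   "runs server-side when clicked"
            with: 'Increment'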
10. In terms of testing, it's relatively easy to write tests. As with Go, it's all included and you're ready to test! Also note: if you want to use Selenium tests, you can use Parasol [5], it's quite easy to use.
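A hedged sketch of how little ceremony an SUnit test needs (the class and package names are made up):

    TestCase subclass: #OrderTotalTest
        instanceVariableNames: ''
        classVariableNames: ''
        package: 'MyApp-Tests'

    OrderTotalTest >> testSumOfLineItems
        | total |
        total := #(12 30 8) inject: 0 into: [ :sum :each | sum + each ].
        self assert: total equals: 50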
11. The following concepts are not explained well, so I'll do it: Seaside heavily uses what we'd call middleware in NodeJS (filters in Seaside). In NodeJS/Express we also have a request object that exists during the lifetime of a request. In Seaside this is called a dynamic variable (WADynamicVariable is the class).
__Stuff I wrote out in the open__
12. I've been working on refactoring i18n in Seaside [6]. I currently find the approach Pharo uses the nicest approach, which is something along the lines of:
'You have some string that needs translation in your web app' SeasideTranslated
When you want to export a catalog file of all the strings you want to translate, then you send exportCatalog new exportCatalog and it will look through the whole image and find every tagged string and export it into a catalog (.pot) file that you can edit with POEdit (a free Mac app [7]).
13. I wrote a simple animation that shows the definition of sin and cos [8]. Most of the code is shown in that video, IMO it gives a good enough sense how to use it.
__Bottom Line Thoughts__
14. I think Pharo is a production-ready language for SaaS apps where you can easily scale by adding instances. I am not sure if it'd be production-ready for consumer facing web apps with many concurrent users.
15. It's an amazing language to create desktop applications with.
16. The debugger capabilities are awesome and there's active research on it. Time travel debugging is currently in its PoC phase (source: Pharo Days).
17. It's also a good language for live music making (source: Pharo Days where someone demo-ed some live coded acid music).
[1] We're hiring developers able to work in Europe and based in a European time zone. The way we use Pharo is IMO the real deal, it goes far beyond what any MOOC can teach you.
> Be careful of setting breakpoints here, you'll crash your image :P
I accidentally defined `=` on a class with an incomplete implementation. I think I saved it just out of habit.
Of course, this royally buggered the entire image because the class browser evaluated `=` when trying to open that specific class.
I was doing this in Squeak rather than Pharo, but that was a scary moment when my mental model before that was to do a `git checkout` to revert the change. I think I ended up writing some code to find the method and delete it, before I realised there was a change browser where I could discard that implementation.
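For anyone hitting the same thing, the usual discipline (sketched here with a hypothetical Point3D that has x, y and z accessors) is to make #= answer sensibly for any argument and keep #hash consistent with it, so tools that compare arbitrary objects don't blow up:

    Point3D >> = anObject
        "Tools like the browser may compare this object with anything,
         so answer false for foreign classes instead of failing."
        ^ (anObject isKindOf: Point3D)
            and: [ x = anObject x and: [ y = anObject y and: [ z = anObject z ] ] ]

    Point3D >> hash
        "Whenever #= is redefined, #hash must stay consistent with it."
        ^ (x hash bitXor: y hash) bitXor: z hash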
The biggest barrier to entry was really understanding the concept of the image and that you do all the work in the class browser, rather than building a hierarchy of classes that all branch out from a main class.
> before I realised there was a change browser where I could discard that implementation.
Haha nice!
There's also a changes file.
I guess I have another recommendation: Pharo is not just about the image. When you look inside the image's folder, there are other files. Try to figure out what they do and how you could use them to your advantage. Check out what the pharo-local folder does, for example. The iceberg folder there, for example, directly hosts the .git folder. I've used it at times if I couldn't get whatever I wanted to do in Iceberg to work (e.g. adding a remote). The .changes file records the changes that you have made (the .sources file holds the base system sources).
For sure. I started off with Squeak so I eventually got the git package installed and, in addition to updating the image and the changes file, it also wrote out the classes I added to disk - basically a directory being the class and every file inside the directory being the implementation of a method.
It's hard for me to see what I'd deploy this way, as a pet project, but it's teaching me a lot more about a nice way to do OOP that also gives you an environment in which OOP can thrive.
A bit late to the show, but I'm trying to use Pharo on an M1.
It runs for sure, however the interface is blurry. Seems like it doesn't know how to render hi dpi. Is there some setting to get that right?
IMO the better approach, given how dynamic and fast-moving development is, is to put this kind of information directly into the image. Pharo even already has a mechanism for authoring tutorials and it has an example tutorial using that mechanism, but I wish there was just more to that content that covered the material in the books/MOOC.
The reason is that, hopefully, the documentation then stays aligned with the state of the particular image you're working with, rather than hoping some volunteer has updated a book for the particular version of Pharo you're using.
I took a stab at doing this myself, but I was learning as I went and eventually ran out of gas since writing Pharo tutorials didn't beat other stuff on my personal priority list.
The problem I have with Pharo is that there are hundreds if not thousands of types — each with their own fiddly interfaces. I don't have time for that. I much prefer the Clojure philosophy of having few types that can do it all.