Do any languages specify package requirements in import / include statements?
40 points by foobarbecue 5 months ago | 87 comments
When coding small programs in python, js, java, or C++, it often feels to me that the dependency requirements list in pyproject.toml, package.json, pom.xml, or CMakeLists.txt contains information that is redundant with the import or include statements at the top of each file.

It seems to me that a reasonable design decision, especially for a scripting language like python, would be to allow specification of versions in the import statement (as pipreqs does) and then have a standard installation process download and install requirements based on those versioned import statements.

I realize there would be downsides to this idea. For example, you have to figure out what happens if different versions of a requirement are specified in different files of the same package (in a sense, the concept of "package" starts to weaken or break down in a case like that). But in some cases, e.g. a single-file python script, it seems like it would be great.

So, are there any languages whose standard installer / dependency resolvers download dependencies based on the import or include statements?

Has anyone hacked or extended python / setuptools to work this way?




In perl, you can do

    use Some::Library v1.2.3;
and there's assorted tooling that will turn that into some sort of[0] centralised dependency spec that installers can consume.

Also e.g. https://p3rl.org/lib::xi will automatically install deps as it hits them, and the JS https://bun.sh/ runtime does similar natively (though I habitually use 'bun install <thing>' to get a package.json and a node_modules/ tree ... which may be inertia on my part).

The perl `use` is a pure >= thing though, whereas I believe raku (née perl6) has

    use Some::Raku::Library v1.*.*;
and similar but I'm really not at all an expert there.

[0] it's perl so there's more than one although META.json and cpanfile are both supported by pretty much everything I recall caring about in the past N years


Go goes at least part-way there. https://golangbyexample.com/go-mod-tidy/ https://matthewsetter.com/go-mod-tidy-quick-intro/ You write your module source, then run `go mod tidy`. This reads your sources for imports and automatically creates the go.mod and go.sum files. What's nice about this is that it ensures reproducible builds, so you should add those files to your revision control repo.


Thanks! Sounds like it's time for me to try go.


Give it a go! (:


As for me - it's a no go.


Go and give rust a go instead


If you prefer a good package manager over a language which keeps you from shooting yourself in the foot with nil pointers…

Package management is important, but other stuff might be more important.

That said: I enjoy go. Is it perfect? No! It feels like the old days with python. I expected a lot more from the golang type checker. Lots of basics are not there (reverse a string? Write your own function. Etc.)


Python has None, which frequently caused me problems in production, so that's no different, except Go can at least distinguish between a string and a nullable string pointer.

Reversing a string is not a basic operation. A) why would you ever need to do it in the real world? B) reversing Unicode is non-trivial due to composing characters. There are packages available for Go that implement grapheme segmentation. If you need it, you can import one.


Integer formatting requires string reversing if you don't precompute the digits or work backward in a second buffer. It becomes necessary when standard formatting routines are too heavyweight for resource-constrained systems.
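
A minimal sketch of the classic digit-extraction approach (in python for brevity; the point is just that digits come out least-significant first):

    def itoa(n):
        # digits are produced least-significant first,
        # so the buffer has to be reversed at the end
        if n == 0:
            return "0"
        neg = n < 0
        n = abs(n)
        digits = []
        while n:
            digits.append(chr(ord("0") + n % 10))
            n //= 10
        if neg:
            digits.append("-")
        return "".join(reversed(digits))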


Nearly every time I’ve had to reverse a string, it’s been for a coding interview.


Yeah, that's a really bad example... many environments now don't have a "string reversal" anymore, and many of the ones that do have the old & busted "string reverse" that doesn't work with Unicode, so they aren't really "string reverse" but just a legacy function that can't be removed anymore and is now technically grotesquely mislabeled. A modern-day string reverse is barely even definable; technically, you have to pin it to a specific version of the Unicode standard to be well-defined, because the next standard is likely to add characters that your string reverse would need to know about to work properly, but can't since they don't exist yet.

In some sense the best answer to "write me a string reversal function" in an interview is to say that no such function even exists anymore; I can write you a byte reversal function if you like, or we can sit down and hammer out a definition of some function I can write for you but it won't necessarily be a "string reverse". Best you can do nowadays is iterate on "graphemes as defined in this specific Unicode standard" and reverse those, but IIRC even that has some pathological edge cases, some of which are not really resolvable. Human languages as a whole are quite quirky.
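
A quick python illustration of the combining-character problem:

    # "é" spelled as base letter + combining acute accent (two code points)
    s = "caf" + "e\u0301"
    print(s)        # café
    print(s[::-1])  # naive code-point reversal detaches the accent,
                    # leaving the combining mark with no base letter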


Deno can directly import in TS/JS from URLs, which is really nice for small 'shell scripts', but has some considerable downsides for bigger projects:

https://deno.com/blog/http-imports

PS: also Godbolt's C/C++ compilers can directly #include from URLs, I guess they run their own custom C preprocessor over the code before passing it on to the compilers:

https://www.godbolt.org/z/6aTKo4vbM


> Deno can directly import in TS/JS from URLs

FWIW, that's a native JS feature (minus the TS part).


Yeah, not sure what Deno has to do with this, other than they implemented the standard at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Deno is the only server-side runtime that has done so, though. Neither node nor bun supports them.


The HTTP imports astound me. Having so little control over how bundlers fetch code screams vulnerability vector. How people are okay with it is wild to me.


You're right, of course, but there's little practical difference from doing `npm install` unless you're actually auditing the supply chain. It just automates a step.


The difference is that you have a single file to audit with npm. With Deno, any file in your codebase might pull in a dependency.


Which isn't really a problem for simple one-file 'shell scripts'. For bigger projects, Deno already suggests maintaining all external imports in a central file.


That assumes someone is actually auditing the npm deps.


It’s trivial to audit your dependencies with https://socket.dev

Disclosure: I’m the founder.


Despite the ease of auditing services, there are notoriously a lot of devs using unaudited deps. Maybe they don't even think about it, unfortunately.


They should at least have something like HTML Subresource Integrity[0], including a hash so that changes in what comes back from the import can be detected.

[0] https://w3c.github.io/webappsec-subresource-integrity


Yes, though it's not enforced when it should be.


Cargo people are working on single-file "Rust script" support that would allow embedding the necessary parts of the manifest (Cargo.toml) directly in the .rs file as special doc comments (so things are transparent to rustc): [0]

[0] https://rust-lang.github.io/rfcs/3424-cargo-script.html


What cargo is doing is something like

    #!/usr/bin/env cargo
    ---
    [dependencies]
    regex = "1"
    ---

    fn main() {
    }
I think this is proposing something like

    #!/usr/bin/env cargo

    #[version = "1"]
    extern regex;

    fn main() {
    }
though `extern` has gone out of style and would instead be something like

    #!/usr/bin/env cargo

    #[version = "1"]
    use regex;

    fn main() {
    }
but that isn't idiomatic either and so you would have

    #!/usr/bin/env cargo

    #[version = "1"]
    use regex::RegexBuilder;
    use regex::Regex;

    fn main() {
    }

which runs into the problem the OP noted, but within one file:

> I realize there would be downsides to this idea. For example, you have to figure out what happens if different versions of a requrement are specified in different files of the same package (in a sense, the concept of "package" starts to weaken or break down in a case like that). But in some cases, e.g. a single-file python script, it seems like it would be great.

To fully support this, we'd need a top-down compilation model like Zig so we could discover dependencies in the current project. Today, we have to do bottom-up compilation, knowing all dependencies a priori.

A downside to any of this is that it is expensive to do any dependency management:

- Introspecting dependencies requires parsing every file in every part of your dependency tree

- Editing dependencies requires walking every file in your project


You can already do that with ruby bundler (which perhaps inspired it)

https://bundler.io/guides/bundler_in_a_single_file_ruby_scri...

But I recognize that isn't quite what the questioner is asking, because at least in ruby you are still going to need "require" statements (unless you have an autoloader) to actually load the code. The inline `gemfile` block specifies your dependencies, which is different from a require/import statement that actually loads code from one of those dependencies.


In F# scripts you can add dependencies directly from Nuget like so:

    #r "nuget: NodaTime, 3.2.1"

    open NodaTime

    let now = SystemClock.Instance.GetCurrentInstant()

    printfn "%A" now


One of the best features.

Just import everything! Nuget packages, local dlls, other F# script files, you name it. It's so good. No extra ceremony. And when you open it in VS Code with Ionide - you get full support of the language server, documentation and syntax highlighting to know that your script is correct.

Night and day difference with standard scripting languages.


> other F# script files

This is where it breaks down a bit.

Importing the same script file twice leads to errors, so you must manually ensure exactly-once imports across your dependency tree of scripts.

This is where C/C++ code reaches for include guards, but F# does not have a define pre-processor command (unlike C#).


Fair. I think if this issue comes up, it's probably time to switch from script files to a full .NET project. Or to package a distinct piece of functionality as such.


There's no good reason FSI can't be improved to handle this though


If you're interested, you could further raise this on F# discord server or in https://github.com/dotnet/fsharp, F# is effectively a community-managed language so if there's a particular change you'd like to see, there is a high chance you can just make it happen if you have time to see it through.


Groovy has Grapes. https://docs.groovy-lang.org/latest/html/documentation/grape...

Groovy is a Java variant that runs on the JVM and allows you to add dependencies as annotations. I believe it uses Maven in the back-end, but it's just so convenient for scripts etc.


I can tell from practice that it's really useful for single-file scripting in the JVM world.


Not exactly what you're talking about, but `uv` lets you specify dependencies in the header of scripts: https://docs.astral.sh/uv/guides/scripts/#declaring-script-d...
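
A minimal sketch of what such a script header looks like (the `requests` dependency here is only illustrative):

    # /// script
    # requires-python = ">=3.12"
    # dependencies = [
    #     "requests",
    # ]
    # ///
    import requests

    print(requests.get("https://example.org").status_code)

Running it with `uv run script.py` resolves and installs the declared dependencies into a throwaway environment before executing the file.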

I think what you describe really only makes sense for a single file script. I _do not_ want to manage dependency hell within my own source files.


This isn't `uv`-specific, this is part of PEP 723 – Inline script metadata https://peps.python.org/pep-0723/


The Raku Programming language allows one to specify the required version, the required authority and API level:

    use Foo::Bar:ver<0.1.2+>:auth<zef:name>:api<2>;

would only work if at least version 0.1.2 of the Foo::Bar module was installed, authored by "zef:name" (basically ecosystem + nick of the author), with API level 2.

Note that modules with the same name, but different authorities and different versions, can be installed next to each other.

Imports from modules are lexical, so one could even have one block use one version of a module, and another block another version. Which is handy when needing to migrate data between versions :-)


Python’s PEP 723 (Inline script metadata) has a section summarising why they couldn’t take this approach under “Why not infer the requirements from import statements?”

https://peps.python.org/pep-0723/#why-not-infer-the-requirem...


Javascript does. For example:

  <script src=" https://cdn.jsdelivr.net/npm/lodash@4.17.21/lodash.min.js "></script>
Or using the new ES6 "import" syntax:

  <script type="module"> import lodash from 'https://cdn.jsdelivr.net/npm/lodash@4.17.21/+esm' </script>


That’s just the CDN’s convention, not a part of the language.


Versions aren't a concept in most languages at all, so they're bound to be imposed by some external system, be that NPM or CDN.

It's part of the language that you can specify a complete URI in an import statement, and a URI for a library really should include a version number somehow.


Oh yeah I was only referring to how the language specifies which deps you need and where you get them, not the versioning too.


You are mixing up program build arguments and program build parameters. In much the same way that a function has arguments for which you substitute actual parameters when calling it, you should view your build as having arguments ("imports") and parameters ("specific package versions") that you pass to the corresponding imports.

Specifying a specific package version in your source directly would be like having a function with arguments, then removing one of those arguments and replacing it with a local variable with the same name that you hardcode to a specific value. It is a perfectly fine thing to do if that argument really should only ever have that specific value, but it is a fairly "fundamental" source code change; your function has fewer arguments and a hardcoded value now!

To be fair, as far as I am aware, no commonly used language seems to understand the distinction and syntactically distinguish build arguments from build parameters. What you should have is a specific syntactic operation that specifies an argument of type "package", a separate, distinct syntactic operation that instantiates a specific package and binds it to a name, and a separate, distinct syntactic operation that passes a package instance to an argument. Then your build system is just instantiating specific packages and passing them as arguments to your files with imports.

In your case, you would then just be instantiating specific packages and assigning them to a "common" name. You would have no "imports" in this sense as you have no arguments, only "local variables", and thus the build system would need to do nothing as there are no "arguments" to your file. That or your build system still instantiates the packages, but as "global variables" assigned to names of your choosing, that you would then just reference in your contained file.


I don't see what's wrong with what OP is asking for. JS does this, but Python doesn't.

And about the parameters thing, oftentimes your code will only work with a specific major version of some dependency. You could switch versions, but it'd require editing the code.


There is nothing wrong with it. The point I am making is that “import” is overloaded to mean two distinct things in most languages: declaration of build “arguments” and supplying build “parameters”.

It is like defining a function f(x) and then asking how to make x = 5 always. Well, you get rid of the parameter and create a local variable in the function named x and set it to 5. That is a perfectly reasonable thing to do.
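
In python terms:

    def f(x):          # x is an argument the caller supplies
        return x + 1

    def f_pinned():    # the argument is gone...
        x = 5          # ...replaced by a hardcoded local
        return x + 1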

But, it would be utterly absurd to make function argument declaration look exactly the same as local variable declaration and initialization. You should have distinct syntax for those two very different operations.

Unfortunately, for build arguments and parameters, every commonly used language I am aware of either only supports one form and punts the other form to the build system or attempts to do both using the same form. That is why it is a mess.

As for versioning, that is actually a “type” problem. When you take a parameter in a function you are usually expecting it to conform to some type or interface guarantees. Build arguments should do the same, but again, I am not aware of any languages that actually manifest the “type” of a package that can be used in a build argument. So, we are stuck with the wonderful world of untyped build arguments that we shoehorn in with “type annotations” (i.e. version ranges) in the build system.


ruby's bundler has an "inline" mode for this, it's mostly meaningful for single-file scripts, as you thought.

https://bundler.io/guides/bundler_in_a_single_file_ruby_scri...


This is Deno (JavaScript runtime). Package version and download location in the import path.

    import * as jose from 'https://deno.land/x/jose@v5.9.6/index.ts'


See Python PEP 722 – Dependency specification for single-file scripts

https://peps.python.org/pep-0722/


Yes! In Roc you specify all of your packages in your main.roc file, there's never a need for an external config file. It is extremely nice to always be able to run stand alone files. https://www.roc-lang.org/ An example: https://github.com/isaacvando/rtl/blob/main/rtl.roc#L1


Dhall has imports from URLs, much like Javascript. From their tutorial:

  {- Need to generate a lot of users?
  
     Use the `generate` function from the Dhall Prelude
  -}
  
  let generate = https://prelude.dhall-lang.org/List/generate
  
  {- You can import Dhall expressions from URLs that support
     CORS
  
     The command-line tools also let you import from files,
     environment variables, and URLs without CORS support.
  
     Browse https://prelude.dhall-lang.org for more utilities
  -}
  
  let makeUser = \(user : Text) ->
        let home       = "/home/${user}"
        let privateKey = "${home}/.ssh/id_ed25519"
        let publicKey  = "${privateKey}.pub"
        in  { home, privateKey, publicKey }
  
  let buildUser = \(index : Natural) ->
        {- `Natural/show` is a "built-in", meaning that
           you can use `Natural/show` without an import
        -}
        makeUser "build${Natural/show index}"
  
  let Config =
        { home : Text
        , privateKey : Text
        , publicKey : Text
        }
  
  in  {- Try generating 20 users instead of 10 -}
      generate 10 Config buildUser


Deno does this.

They ended up adding dependency files later on to make it possible to keep package versions in sync without changing every file on every version change.


A while ago I implemented something similar to that in Python, although specifying versions requires using function calls instead of imports. It turns out that in Python you can execute arbitrary code during imports via hooks, including calling out to pip to install a dependency.

https://github.com/miedzinski/import-pypi
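
The core mechanism is a meta path finder. A minimal sketch (not the import-pypi implementation, and it naively assumes the PyPI distribution name matches the import name, which is often false):

    import importlib
    import importlib.util
    import subprocess
    import sys

    class PipInstallFinder:
        _attempted = set()

        def find_spec(self, name, path=None, target=None):
            top = name.split(".")[0]
            if path is not None or top in self._attempted:
                return None  # only handle top-level imports, once each
            self._attempted.add(top)
            try:
                # naive assumption: distribution name == import name
                subprocess.check_call(
                    [sys.executable, "-m", "pip", "install", top])
            except subprocess.CalledProcessError:
                return None
            importlib.invalidate_caches()
            # retry through the normal finders now that it's installed
            return importlib.util.find_spec(top)

    sys.meta_path.append(PipInstallFinder())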


Common LISP seems like it would be a prime language to do something like this. Since one of the core tenets of the language is that all code is data, no?

That said, I'm not sure it is exactly what you are asking. They still somewhat expect you to have the system defined in a central spot as far as what all you depend on. Which I think you are almost always going to want.

That is, if you want individual files to be able to specify dependencies, that is probably doable; but you are almost certainly going to want something to pull those up to a central spot. If only so that you can work with it as a cohesive unit?

I get that you can feel these are somewhat redundant. But they are also meaningful? You could, similarly, not use any imports on a Java program and just fully qualify names as you use them. Such that imports could similarly be seen as redundant/unnecessary. At some point, you will want a place to say what names can and cannot be used in execution. Which ultimately puts you back in the same game.


>Common LISP seems like it would be a prime language to do something like this.

asdf systems can specify a version of a system it depends on.


Right, but I don't know anyone that has a preprocessor-like setup for a codebase to automatically build your depends-on list. Which is probably doable?


>it often feels to me that the dependency requirements list in pyproject.toml, package.json, pom.xml, or CMakeLists.txt contains information that is redundant with the import or include statements at the top of each file.

It doesn't. The name that you use for a third-party library in software generally isn't remotely enough information to obtain it, and it would be bad to have an ecosystem where it were - since you'd be locked in to implementations for everything and couldn't write software that dynamically (even at compile time) chooses a backend. On the other hand, many people need to care about the provenance of a library and e.g. can't rely on a public repository because of the risk of supply-chain attacks. Lockfiles - like the sort described in the draft PEP 751 (https://peps.python.org/pep-0751/), or e.g. Gemfile.lock for Ruby) - include a lot more information than a package name and version number for that reason (in particular, typically they'll have a file size and hash for the archive file representing the package).

>It seems to me that a reasonable design decision, especially for a scripting language like python, would be to allow specification of versions in the import statement (as pipreqs does) and then have a standard installation process download and install requirements based on those versioned import statements.

It's both especially common for naive Python developers to think this makes sense, and especially infeasible for Python to implement it.

First off, Python modules are designed as singleton objects in a process-global cache (`sys.modules`). Code commonly depends on this for correctness - modules will define APIs that mutate global state, and the change has to be seen program-wide.
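
A quick demonstration of the cache:

    import sys
    import json

    import json as json2       # second import hits the cache
    assert json2 is json
    assert sys.modules["json"] is json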

Even if the `import` syntax, the runtime import system and installers all collaborated to let you have separate versions of a module loaded in `sys.modules` (and an appropriate way to key them), it'd be impractical for different versions of the same module to discover each other and arrange to share that state. Plus, library authors would have to think about whether they should share state between different versions of the library. There are probably cases where it would be required for correctness, and probably cases where it must not happen for correctness. And it's even worse if the library author ever contemplates changing that aspect of the API.

Second, there's an enormous amount of legacy that would have to be cleaned up. Right now, there is no mapping from import names to the name you install - and there cannot be, for many reasons. Most notably, an installable package may legally provide zero or more import names.

I wrote about this recently on my blog: https://zahlman.github.io/posts/2024/12/24/python-packaging-... (see section "Your package name that ain't what you `import`, makes me frustrated").

Third, Python is just way too dynamic. An `import` statement is a statement - i.e., actual code that runs when the code does, a step at a time, not just some compile-time directive or metadata. It can validly be anywhere in the file (including within a function - which occasionally solves real problems with circular imports); you can validly import modules in other ways (including ones which bypass the global cache by default - there are good system architecture reasons to use this); and the actual semantics can be altered in a variety of ways (the full system is so complex that I can't even refer you to a single overall document with a proper overview).
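
For example, this is perfectly legal, and no scan of the top of the file would see it (imagine third-party names in place of the stdlib ones):

    def load_codec(binary):
        # an import is an ordinary runtime statement; it can run
        # conditionally, deep inside a function
        if binary:
            import pickle as codec
        else:
            import json as codec
        return codec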

> For example, you have to figure out what happens if different versions of a requirement are specified in different files of the same package (in a sense, the concept of "package" starts to weaken or break down in a case like that).

As I hope I explained above, it's even harder than you seem to think. But also, this would be the only real benefit. If you want to have multiple files that always use the same version of a library, then it makes no sense to specify that version information repeatedly. (Repeating the import statement itself is valuable for namespacing reasons.)

> But in some cases, e.g. a single-file python script, it seems like it would be great.

Please read up on PEP 723 "Inline script metadata" - https://peps.python.org/pep-0723/. It does exactly what you appear to want for the single-file case - but through comment pragmas rather than actual syntax - and is supported by Pipx and uv (and perhaps others - and is in scope for my own project in this general area, Paper).

> Has anyone hacked or extended python / setuptools to work this way?

Setuptools has nothing to do with this and is not in a position to do anything about it.


Wow, thank you for taking the time to write this comprehensive answer! In the python context, everything you wrote makes sense to me, and I do have more to learn here. I had thought about imports lower down in a file, but I had forgotten about global state mutation.


Fortunately for you: I routinely search HN for posts about Python, and my current projects are a build backend (bbbb), an installer/environment manager (Paper), and writing about Python packaging on my blog.


By the way... as helpful as you've been here, I thought I should let you know you come off as rude and arrogant, in case you aren't aware of it. I'm not personally offended at all, but the "naive" and "fortunately for you" comments are the type of thing that can make people ignore the substance of what you're saying, which I doubt is your intent. I know this sort of interpersonal navigation can be a pain in the butt.


It wouldn't be at all the first time I've heard it. Thanks for the specific hints.

A fair amount of the time, I do notice that my writing could have an unintended, uncharitable reading like that, but I either can't think of a good way to improve it or (more commonly) just don't feel responsible for doing so. Tone of voice is notoriously hard to communicate; even explicit markers (such as smilies) are often more easily interpreted as sarcastic or insincere.

I especially don't like adding a lot of words to try to avoid giving that impression - because when other people do it, I often feel like they're wasting my time with all the extra words I have to read. (But of course, they don't know that I was already willing to read them charitably....)


You are of course not responsible for typing out nice messages. Just like nobody is responsible for reading what you write. :)

It may be of interest if you want more people to look at Paper or bbbb though. But that may also not be what you want, or attract users you don’t want. We all make software for a variety of reasons!


> I either can't think of a good way to improve it or (more commonly) just don't feel responsible for doing so.

Change naive to beginner in all your writing. Naive brings a negative connotation with it always (unless you truly intend to insult the other person).

And you could just drop "fortunately for you" since it adds nothing to the statement and technically takes longer to write than just dropping it.


I feel like "beginner" is at least as insulting, especially if it turns out not to be accurate. But the point is well taken - I tend to overthink this instead of just making simple changes.


If you change "Fortunately for you ..." for "I routinely search HN for", nobody would need to read it *extra* charitably.


PEP 723 looks great! I'll give Paper a try one of these days.


Looking through https://dbohdan.com/scripts-with-dependencies, I see

- JS: Deno and Bun

- Scala: Scala CLI and Ammonite

- Python: fades (https://github.com/PyAr/fades)


Scala has Ammonite (https://ammonite.io/#IvyDependencies)

    import $ivy.`com.lihaoyi::scalatags:0.7.0 compat`, scalatags.Text.all._

    val rendered = div("Moo").render


Scrapscript[0] comes to mind:

> Any chunk of the language can be replaced with a hash.

> These chunks are called “scraps”.

> Scraps are stored/cached/named/indexed in global distributed “scrapyards”.

[0]: https://scrapscript.org/


Some do (Go), but it is really not the best idea. You do want to have one central place where you specify:

- version constraints

- source of packages (you may want to host them yourself one day)

- any additional metadata (some packages have options or features)


It does sound painful to have to update a package version by updating an import string in hundreds of files


It is not painful in Go.

For example… let’s say you have just plain search-replace and no smart tools. You need to update github.com/abc/def to github.com/abc/def/v2. This is a search-replace operation.

This only happens when packages publish breaking changes. The minor version is stored in go.mod.


You can do it using gofmt, by writing

    gofmt -w -r '"github.com/abc/def" -> "github.com/abc/def/v2"' .

That won't do subpackages though.


Now add complex versioning and optional dependencies and dev-level dependencies and... You'll get what he means.

Every language starts the way you describe, then needs are added.


I’ve been a Go programmer for many years now, and I don’t “get what he means”. Am I stupid? Maybe you could elaborate.

What is complex versioning? In Go, you specify a minimum minor version, and breaking changes change the package name & import path (or they’re supposed to). Why would I want complex versioning? The underlying problem that versioning solves is complicated, but that doesn’t mean you benefit from a complex versioning scheme.


#define SDK_VER 1.2.3


This is how Go worked in the old days, before modules.

Nowadays, the version constraints are specified in go.mod. Because the fully-qualified package names are used to import them, you can reconstruct go.mod from the sources, assuming that you don’t care about version numbers.

The source code says which packages, the go.mod file says which versions. (Major versions have different import paths.)


For finl, I’m expecting that imports/package requirements will be one and the same, but that only works because as a document language, there is a single source file. The basic idea is that a user would write, e.g.,

     \LoadExtension{provider:dependency:1.0}
or

     \DocumentFormat{ams:ams:1.0}
to load the specific version of an extension or format, with this then used to fetch the code from a remote repository if a copy isn’t already cached.

But for something like loading a dependency for Java or rust, I don't think something like this would make sense. Or maybe I'm just too accustomed to the Maven way of doing things.


It would be great to eliminate manifest files but Dependency Confusion, supply chain attacks and malicious project takeovers are a huge security challenge right now.


TCL’s “package require” statement doesn’t go out and fetch the packages from the ‘net—they have to be installed locally. But it does let you specify constraints on the version of the package to use: everything from “use any version after 1.2.3” to “use any version between 1.2.3 and 2.3.4” to “use only version 1.2.3”.


You can get a similar effect in Elixir scripts with Mix.install/2: https://github.com/wojtekmach/mix_install_examples

It supports git-links as well as package names in repos and versions.

Also happens to be a very nice and capable scripting platform, with a reasonably small runtime.


Yes! Small scripts should live in a single, self-contained file.

In Firefly, you can specify dependencies at the top of the file:

https://www.firefly-lang.org/

When your project grows, you can move the dependencies to a single shared project file.


Go supports this with `go run`, e.g.

    # Example 1: Running the latest version
    go run github.com/rakyll/hey@latest

    # Example 2: Running a specific version
    go run github.com/rakyll/hey@v0.1.4


In Java I use jbang https://www.jbang.dev/


Yes, I feel the same way. Anything less than having the compiler/interpreter automatically resolve `import <URL>` is idiotic and a massive waste of everyone's time.

"Modules" are a stupid over-engineered concept that is significantly worse in every way to includes that can be qualified/namespaced.

The fact that so many programmers defend this pointless nonsense is one of the many reasons I can't take this industry seriously anymore. "Engineers" overcomplicate loading a text file.


Another disadvantage would be that you'd have to run the code to figure out the dependencies, no?


In a compiled language, this can be another step before compiling and linking.



