Fortran 2023 (iso.org)
184 points by hardmaru on Nov 22, 2023 | 123 comments


I learned Fortran for a summer research job in college about 15 years ago. It was my first programming language and I used it to do some population modeling. I found an old Fortran book from when my dad went to school back in the early 70s and learned it from that which was a cool experience - I just had to skip over all the parts about entering your code into punch cards :D


> I just had to skip over all the parts about entering your code into punch cards

A few years ago I worked with a scientist who used a huge Fortran program with a long history. He described the lines in a text file that configured the program: "On the first card you put ... on the next card you put ..." They still call them cards! He was born after actual punch cards went out of use.


> I just had to skip over all the parts about entering your code into punch cards

Haha I bet that was a fun and weird experience.


A shame to skip the IBM 029 experience. (separate issue from drum cards with a skip)


Besides the technical part, Fortran's community has also evolved a lot over the last 5-10 years. See the community paper here: https://arxiv.org/abs/2203.15110 "The state of Fortran" (not an author myself)


F’23 doesn’t have much in it that’s new over F’18, which itself was also a pretty minor update to F’08.

One thing to watch out for: in a departure from former commitments to compatibility, F’23 changes some semantics of working conforming code, specifically allocatable characters being used for internal writes, iomsg=, &c.


It must be hard to update a language that was already perfect in 1990.


F'90 lacked allocatable derived type components, but was otherwise very well thought out, and its major contributions (modules, free form, internal procedures, generic interfaces, ELEMENTAL, array syntax, POINTER/TARGET...) have held up well. Kind of a mixed bag since then, frankly (FORALL, LEN type parameters, FINAL, ENUM, DO CONCURRENT...)


Fortran first appeared in 1957... It's probably older than most people here on HN.


It's just 34 more yeats until F'57 becomes ambiguous ..


I think you’ve made a typo,

> yeats

It is yeets, two e’s no a.


As the other comment pointed out, F90 is surprisingly good considering it is also probably older than most people on this site, haha.


"The first significantly widespread high-level language was Fortran."

https://en.m.wikipedia.org/wiki/High-level_programming_langu...



The "new" Fortran for me was f77.


> allocatable characters being used for internal writes

Can you explain this?


If you allocate a character object, then write to it with an internal write statement, F’23 now requires that the variable be reallocated to the actual length of the formatted record, instead of retaining its former length and being padded out with blanks (or truncated).
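
Roughly, for a deferred-length allocatable character, something like this (a sketch of the change, not an example taken from the standard):

    character(:), allocatable :: buf
    buf = repeat(' ', 16)        ! allocation on assignment: buf has length 16
    write(buf, '(i0)') 123       ! internal write of a 3-character record
    ! older semantics: buf keeps length 16, holding '123' padded with blanks
    ! F'2023 semantics: buf is reallocated to length 3, holding exactly '123'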

The LLVM Fortran compiler (Flang) has warnings for various usages whose semantics would change if F'23 semantics were adopted by default, which I'm not sure I want to do.


I've never used fortran so I'm not sure what a character object or an internal write is, but in C would this be something like allocating a character array and sprintf-ing to it?


That's a good analogy, with the change being that sprintf is now somehow required to reallocate your buffer on every call if the length ever changes.


For a summary of changes to the language, see John Reid's slides here: https://fortran.bcs.org/2022/AGM22_Reid.pdf

More info, The Home of Fortran Standards: https://wg5-fortran.org/


https://wg5-fortran.org/N2201-N2250/N2212.pdf is John Reid's complete document.


as a funny note John Reid was the singer of Nightcrawlers in the 90s.


That is funny. Interesting career trajectory.


Dang ternary operator. I wish ternary was "open":

    X ? A : B : ... Z;
Where the value of each expression counts down from A..Z, i.e., if there are 8 limbs, then A is 7, B is 6, ..., Z is 0.
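
For reference, the new F'23 conditional expression is two-way only, though it chains; with made-up variables, and syntax from memory, so treat this as a sketch:

    y = ( x > 0 ? a : b )                ! the new two-limb form
    y = ( x > 1 ? a : x > 0 ? b : c )    ! chaining gives a multi-way-ish form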


you want a one-line switch expression??


Yes; not often, but, yes.


Sounds similar to computed GO TO, as in:

      GO TO (10, 20, 30, 40) I
https://docs.oracle.com/cd/E19957-01/805-4939/6j4m0vn9l/inde...
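
The structured equivalent these days would be SELECT CASE; roughly (the handler names here are made up):

    select case (i)
    case (1); call handle1()
    case (2); call handle2()
    case (3); call handle3()
    case (4); call handle4()
    end select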


Which FOSS compilers support it?


If "it" is F'23, then none. GNU Fortran has had the "new" degree-unit trig functions for a while, but no compiler, FOSS or otherwise, has the newly invented features of this revision.

Fortran doesn't prototype features with real implementations (or test suites) before standardizing them, which has led to more than one problem over the years, as ambiguities, contradictions, and omissions in the standard aren't discovered until years later, when compiler developers eventually try to make sense of them, leading to lots of incomplete and incompatible implementations. I've written demonstrations of many examples and published them at https://github.com/klausler/fortran-wringer-tests/tree/main


flang [1], it's part of the LLVM project.

[1] https://github.com/llvm/llvm-project/tree/main/flang


GFortran, Flang and LFortran are all open-source compilers that support modern Fortran.


Hooray! But I’ll stick to fortran77


true to the username!


Common blocks for the win.


I can write Fortran 77 in any language!


You be one good Blub programmer.

pg can write Lisp in any language!

Heck, pg can write any language in Lisp!

pg can even write On Lisp!


hmm, wonder which fields people are using Fortran in now?

did a search, ha

- numerical weather prediction,

- finite element analysis,

- computational fluid dynamics,

- geophysics,

- computational physics,

- crystallography and computational chemistry.


There is a directory of github projects that use Fortran: https://github.com/Beliavsky/Fortran-code-on-GitHub

I'll add that it's well suited to modern AI inference and several projects exist, e.g.

https://github.com/nlpodyssey/rwkv.f90

https://github.com/certik/fastGPT

https://github.com/rbitr/llm.f90 (disclaimer, mine)



I wonder exactly why that is, though. I imagine it has more to do with historical use of Fortran in those fields than anything that Fortran specifically has to offer. There's no real reason to switch to another language, I suppose, except that you're limiting yourself to a much smaller pool of talent. Maybe learning Fortran "proves" one is passionate enough about one of these fields? Or requiring Fortran keeps out hordes of undisciplined JavaScript developers.

EDIT: Wow, I didn't expect so many informative and reasonable replies. Kinda makes me want to check out Fortran. Thanks y'all!


I use Fortran for some research codes (I also use Julia and C++). The main advantage of Fortran, to me, is that it is very easy to write very fast code, and has very few foot-guns. The latter property also means it can be quite difficult to do non-numerical stuff, but that's a great trade-off if you're a physicist or engineer wanting to write simulations. The name comes from "Formula Translation" for a reason. Doing the kinds of multi-dimensional array manipulation and linear algebra you can do with out-of-the-box Fortran in C or C++ feels like torture.
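
For a flavor of what I mean by out-of-the-box array handling, a fragment (n is just whatever size you need):

    real, allocatable :: a(:,:), b(:,:), c(:,:)
    allocate(a(n,n), b(n,n), c(n,n))
    c = matmul(a, b)             ! dense matrix product, built in
    c = c / maxval(abs(c))       ! whole-array expression with a reduction
    where (c < 0.0) c = 0.0      ! masked elementwise update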

I generally prefer Julia, as it's a more general-purpose language, but there are parts of Fortran I like better than Julia, such as

- Fortran uses static typing and is statically-compiled

- It's a lot easier to write slow Julia code than I'd like, and you generally need to think more to make code fast than you do in Fortran.

However, I think Julia beats Fortran in most everything else. My main gripe with Fortran these days is 1) lack of a decent default package manager (FPM seems great, but most Fortran codes don't use it) and 2) slow evolution due to the conservative standards committee. I don't understand why we still can't have extremely basic generics in 2023, or a simple string type.


I've used Fortran for commercial engineering codes. Numerical computing is moving towards layered intermediate representations (IR) in the compiler and runtime to better specialize on the inputs and target hardware. The compiler passes are becoming the focal point and the languages are becoming clients. Fortran's future is probably as a domain-specific language alongside other languages (eg Julia) targeting a common IR (eg MLIR). I don't think we'll see any big changes in the language until it's adapted to fit into the emerging ecosystem.


C and C++ and most modern programming languages have very poor support for arrays in comparison even with the ancient versions of Fortran, especially for multi-dimensional arrays.

In C++ it is possible to define adequate types and operations for good support of multi-dimensional arrays, but this means that a user must choose some additional libraries to get the support that does not exist in the base language, unlike with Fortran.

C++ can be much better than Fortran for scientific computing, but only after a significant initial effort of establishing a well-specified programming style, based on selecting an appropriate non-obsolete subset of C++ and after selecting carefully a set of libraries that provide all the missing features.

For someone whose main job is not programming, using Fortran can be simpler.


C++ Eigen has become the de facto replacement for arrays when it comes to matrices and vectors. I am thinking this should make it more competitive with Fortran in this domain.


Pick out the even (i,j where both even) elements from a matrix

    m[0::2, 0::2]                        // Numpy
    m(0::2, 0::2)                        // Fortran [1]
    m(seq(0, last, 2), seq(0, last, 2))  // C++ Eigen
[1] If you have declared the array to start indexing from 0.
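
e.g. (just a fragment):

    real :: m(0:7, 0:7)      ! lower bounds of 0; the default is 1
    print *, m(0::2, 0::2)   ! elements where both indices are even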


One big thing I remember is that it's illegal to have multiple array arguments reference overlapping storage, so storage regions can't alias. This means functions can be optimized more aggressively.

My fingers can't type Fortran anymore, so I'm going to use C as an example.

Imagine you have this function:

    /* Add entries in arg1 to arg2, put result in result */
    void add_vec(int arg1[], int arg2[], int result[], int length) {
        for(int i = 0; i < length; i++) { 
            result[i] = arg1[i] + arg2[i];
        }
    } 
In C, it's perfectly legal to call this like so:

    int array[101];
    // ...pretend array is initialized...
    // for the first 100 elements, set a[i] = a[i] + a[i+1]
    add_vec(array, array+1, array, 100);
The C compiler has no choice but to iterate through the array one element at a time, doing things in the exact order that the source code spells out. No loop unrolling or vectorization can occur.

In Fortran, the compiler knows that the arrays are not aliased, so it can do much more reordering, unrolling, and vectorization to speed this code up.
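
Roughly the Fortran counterpart of the same routine (a sketch; the dummy arguments may not overlap when one of them is being assigned, which is what frees the optimizer):

    subroutine add_vec(arg1, arg2, result)
      integer, intent(in)  :: arg1(:), arg2(:)
      integer, intent(out) :: result(:)
      result = arg1 + arg2   ! no aliasing possible, so reorder/unroll/vectorize at will
    end subroutine add_vec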


And this is why it's faster than C (some days, for some cases, etc.), but still actually faster.

Is there much going on with Fortran on GPU?


After 30 seconds of furious googling: https://developer.nvidia.com/cuda-fortran


TIL that the restrict keyword is part of C99 but not C++, and also that it is not necessary in Fortran.


In Fortran you can write high-performance array code without getting lost in a maze of pointers. As most of the code is written by academics, the talent question is pretty immaterial; there is no money to compete for top-level software engineers anyway.


Modern C++ has obviated almost all use cases for pointers. Smart pointers handle 90% of the cases where I would use a ptr. Raw ptrs are just for transient weak ptrs.


I've never used modern C++, but I'd be interested to try it just to see what the overhead is in getting started writing some scientific code. When I was in grad school I used mostly Matlab, in which you can write matrix math without really having to learn any "programming". Same for numpy and Fortran. Definitely not for C. What about C++? Even the fact that pointers are being used at all would make it less attractive for a grad student who just wants to multiply some matrices.


The problem with modern C++ is that there are several meanings of modern, and lots of classical libraries that aren't being rewritten.


Fortran can be faster than C because it has a more restrictive memory model (compiler can make more optimizations when it knows nothing else is aliasing some memory)


Fortran can’t be faster than C; you can write inline assembly kernels in C, you can add keywords to promise no aliasing, etc etc.

Fortran just has better syntax and defaults than C for this stuff. A devoted, expert C tuner with unlimited time on their hands can do anything. A grad student with like a year of experience can write a Fortran code that is almost as good, and finish their thesis. Or, a numerics expert can write a numerical code for their experiments and be reasonably sure that they are operating within a good approximation of the actual capabilities of their machine (if you are an expert on numerical computing and C+assembly, you can write a library like BLIS or gotoBLAS and become famous, but you have to be better than everybody else in two pretty hard fields).

IMO this is important to point out because somebody can bring a microbenchmark toy problem to show C beating Fortran easily. As long as they spend way more effort on the problem than it deserves.


I don't think anything here is in conflict with the statement that Fortran can be faster than C in some cases, but yes, you're right that technically it should be qualified as "naive Fortran can be faster than naive C".


There is a big difference here between portable ISO C and C with compiler extensions and platform specifics.


You can inline assembly in a lot of languages. However I’d argue that doing so is thus no longer writing code in that host language.

To put it another way, your comment is a little like saying Bash is just as fast as C because you can write shell scripts that inline assembled executables.


There was another discussion here comparing the two recently: https://news.ycombinator.com/item?id=37595240


Is there no way to apply such optimizations to C code as long as the developer adopts certain standards in writing code?


In principle yes; in practice, no. This is one of those places where our developer intuition fails us, and I fully include mine. It feels like it ought to be feasible, even now, but in practice it turns out that there are just so many ways to screw up that optimization without realizing it, with responsibility for it scattered between compilation units (which is to say it isn't even necessarily clearly one thing's fault; these can arise in interactions), and once it creeps in just a little bit it tends to spread so quickly that in practice it is not practical to try to exclude the problems. You can't help but write some tiny crack that it sneaks into.

This is part of why I'm in favor of things like Rust moving borrow checking into the language. In principle you can statically analyze all of that in C++ with a good static analyzer; in practice, it's a constant process of sticking fingers in broken dikes[1] and fighting the ocean. Sometimes you just need a stronger wall, not more fingers.

[1]: https://writingexplained.org/idiom-dictionary/finger-in-the-...


The main issue is that C doesn't have a way to pass arrays of any size to functions while preserving their type (the size is part of an array's type in C); by convention, one generally passes a pointer to its first element and a separate length parameter. A compiler cannot know that any two pointers do not point to the same location. Hence, it's harder to vectorize code like this, because each store to result[i] may affect arr1[i] or arr2[i]; in other words, the pointers might alias. In C89:

  /* They're all the same length */
  void add(float *result, float *arr1, float *arr2, size_t len)
  {
      size_t i;

      for (i = 0; i < len; ++i)
          result[i] = arr1[i] + arr2[i];
  }
The solution is supposed to be the "restrict" keyword, which informs the compiler that other pointers do not alias this one. It was added in C99. You declare a pointer that doesn't alias like this:

  float *restrict float_ptr;
If a restrict pointer is aliased by another pointer, the behavior is undefined.

https://en.wikipedia.org/wiki/Restrict

It's hard to judge the extent to which this helps. Apparently, when Rust annotated all its mutable references and Box<T> types with the LLVM equivalent of the "restrict" annotation, they exposed a lot of bugs in LLVM's implementation of it, because most C code doesn't use "restrict" pointers as extensively as Rust code uses mutable references and Boxes.


Theoretically with `restrict`, but that doesn't really help. But the biggest advantage of Fortran is its support for multidimensional arrays, slices,...


Partially historical inertia, partially the fact that commercial Fortran compilers have had decades to optimize numerical code to within an inch of its life.

Also, for at least the older versions of it, it's a pretty small language. If you have a grad student who's already familiar with programming, they can probably teach themselves Fortran and start being able to make useful changes to your research code in a week or two. Compare, say, C++ where just learning the language enough to be useful could take most of a semester.


Fortran is closer to a highly optimized Matlab than a general purpose language like Javascript. It excels at matrix and vector calculations and is simple enough of a language that getting something working isn’t too difficult.


It's been about 30 years since I last wrote FORTRAN (77), but back then it was used in computationally intensive applications because it was (or at least felt like) working much closer to the bare-metal data and code segments of a process than a 'high-level' language like C. Certainly the compilers produced far fewer lines of assembly per line of source compared to C, pushing the burden of correctness back on the programmer. It was almost like working with an assembly macro language.

I suspect much has changed since though, so its fields of use may just be convention now.


I vaguely remember early fortran code was really close to 1:1 source code -> assembly language. Probably just because that's how low-level compilers were.

I also remember from the era, that a one-line program with a '.' in column 6 would generate 600 lines of errors from the IBM optimizing cobol compiler.


If you’re writing a program or a library to perform fast arithmetic computation over large numeric arrays, Fortran is the optimal tool for the job.


Think of FORTRAN as a GPU shader language, or something like numpy. It's heavily leaning towards massive vector processing, usually automatically parallelized, and the compiled code runs very fast. It gives you a lot of tools for doing numerical stuff straightforwardly and efficiently at the same time. It's not a very ergonomic general-purpose language, but in its area it shines.
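
e.g. an ELEMENTAL function is defined on scalars but applies across whole arrays, shader-style (a sketch; the array names are placeholders):

    elemental function sigmoid(x) result(y)
      real, intent(in) :: x
      real :: y
      y = 1.0 / (1.0 + exp(-x))
    end function sigmoid

    ! later: b = sigmoid(a) maps it over every element of the array a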


In a lot of cases the libraries are developed by researchers and academics in physics, engineering, biology, and other sciences. Having a huge pool of CS graduates isn't very helpful, except for the subset getting Phds in one of those other fields.


How easy are strings and associative arrays in Fortran?

My experience working with particle physics code (which is generally C++) is that we could probably make it a lot faster if we completely banned the use of `std::map<std::string, T>`. But since it's there, and since people use it to e.g. look for files or parse configuration, you often find an API that accesses elements in an array by string rather than by index. Sure, it just made your code 20x slower, but it's only one part of much bigger framework and this wasn't on the "hot path" anyway, so you just use it.

Rinse and repeat a thousand times, you can see how things slow down a bit.


Odd that so far nobody has mentioned nuclear weapons.


LLNL has ported most if not all of their codes to C++, but I think LANL is still full on Fortran


Every weather prediction model, and every climate model, is in Fortran.

Caltech has a project attempting to write a new climate model in Julia:

https://github.com/CliMA/ClimaCore.jl

https://clima.caltech.edu/


There are a few other active projects porting weather, ocean, and climate models to Julia, C++, or Python.

Getting a weather or climate model from zero to production grade requires approximately 100 person-years, or $20M (personal experience). Because of the extremely high scientific expertise needed to correctly implement such models, it's more difficult to leverage open source contributions from a broader community than it is with web or ML frameworks. So most of the development in these projects is done by full-time hires, and to a lesser extent by contributions from early adopters.

The key technical arguments that I hear/read for such transition projects are ease of portability to accelerators (e.g. GPUs), and higher availability of programmers in the workforce.

My intuition is that a $20M budget, if carefully allocated to compiler and hardware accelerator teams, could solve running Fortran on any accelerator in its entirety.

With Fortran's tooling and compiler support for offloading standard Fortran to accelerators slowly but surely improving over time, the rationale for porting to other languages becomes increasingly more questionable and is being revisited.

But regardless of technical arguments, it's a good thing for large models and frameworks to be written and explored in diverse programming languages.


Coding for GPU is about more than language support. I'm currently rewriting a CFD engine that is implemented in Fortran to use compute shaders, and there are several cases where the logic of the code has to be rebuilt from scratch to fit the GPU model.

Also, I understand the only Fortran compiler that supports GPU is the one from nvidia which is proprietary. I prefer to rely on open source for a code base that will last at least ten years...

But reading this HN thread, I understand that Fortran is more alive than I thought. How many new developments are done with Fortran? I mean, to me, Fortran is a bit like Cobol: it is so entrenched that, obviously, it still has a lot of activity, but the momentum is moving towards more modern languages... But, well, that's all guesses and impressions...


For example, the Exclaim project at ETH Zurich is trying to rewrite ICON in Python (granted, they use special decorators that JIT to GPU instructions). It looks like a bit of a fool's errand to me to try and write such complex and performance-sensitive software in a dynamic language. I wonder why Chapel hasn't seen widespread adoption yet.


The recent "Neural General Circulation Models" pre-print [1] from Google Research indicates that the team built a spectral dycore from scratch using JAX; in Appendix A they note that it comes in at just over 20,000 lines of code! This model isn't quite "production grade" from the perspective of NWP (lacks microphysics and therefore any prognostic precipitation in the forecast), but it's the only project I can think of that has rapidly produced such an atmospheric model "from scratch" with a high degree of engineering rigor.

[1]: https://arxiv.org/abs/2311.07222


There is a fairly major assumption being made here, which is that none of the $20M cost is coming from Fortran. Part of the reason for porting the models is ease of development, and if you make development easier, that $20M can go down pretty quickly.


As a real-world counter-example, CLiMA (Julia) lists 54 people on their Sci&Eng team, and the project has been going for several years now. It is far from production ready. Granted, they tackle everything at once (atmosphere, ocean, data assimilation, machine learning etc.). I'm a big fan of the project and wish it the best. This is to illustrate what a huge endeavor it is to build a system like this, regardless of the implementation language.


Developing projects like this in Fortran is not any more difficult than other languages. I can't speak to Julia because I don't understand it well enough, but in my experience writing this kind of code in Fortran is more natural and easier than in Python + numeric/array/accelerator libraries. The key unfulfilled need is to have Fortran compilers do offloading efficiently for you. Then, there's no comparison.


I did my PhD at Caltech and knew one of the grad students working on this project. I remember him telling me that Julia is a complete mess (don't recall why he disliked it). I would imagine it feels more "modern" than Fortran, but that doesn't necessarily mean it's better per se.


There’s a long tail of highly optimized library code written in FORTRAN.

Many high performance numerical computing libraries are “using FORTRAN” in the sense that they’re linking binaries compiled from FORTRAN for numerical linear algebra functionality. Cf BLAS (Basic Linear Algebra Subprograms).
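
e.g. the classic DGEMM entry point, still called through its original Fortran 77 interface; the array and size variables here are just placeholders for data you've already set up:

    ! C := alpha*A*B + beta*C, with A m-by-k, B k-by-n, C m-by-n
    call dgemm('N', 'N', m, n, k, 1.0d0, a, m, b, k, 0.0d0, c, m)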


The best BLASes (BLIS, OpenBLAS, and MKL) have lots of C nowadays. IMO it is actually one of the weird cases where C makes more sense: you are mostly tuning GEMM and TRSM, so you can actually throw an unlimited amount of expert C coder effort at them.

But most of the ecosystem other than BLAS doesn’t share this property.


Other than the original reference implementation, you would not find any other Fortran version of BLAS anymore. And nobody links to the reference implementation for performance code. They are all written either in close-to-the-metal C with SIMD etc. or straight in assembly altogether. LAPACK is mostly Fortran, but there are also libraries slowly rewriting it in Rust/Julia and C.


I saw a job offer wanting primarily Fortran recently. It was from a government weather service, so I guess fluid dynamics/weather prediction.


The list of the most popular Fortran repositories on GitHub:

https://play.clickhouse.com/play?user=play#U0VMRUNUICdodHRwc...

WRF is one of the most notable...


Let's include anything using numpy in there. It's not a huge portion of the codebase (anymore), but it's still there!


Same old same old. Nobody is starting new Fortran projects outside of the domains where it has been used for decades.


Anything that uses SciPy, so software defined radio too via GNU Radio.


- everyone using numpy


- bioinformatics


Just in time for Advent of Code.


Unfortunately I don't think the compilers will add it in time.


Is anyone aware of a nice resource that teaches the language while also showing its evolution at each step?

(as opposed to just teaching the latest version only / an older version only)


The Metcalf books, collectively.


Recent and related:

Fortran - https://news.ycombinator.com/item?id=37291504 - Aug 2023 (223 comments)


Numpy probably calls Fortran and C libraries under the hood.

Is Fortran still considerably faster than Numpy?


No, NumPy has tools for Fortran inclusion (f2py) for users, but itself has no Fortran code. SciPy is about 15% Fortran 77 code, but that is slowly being rewritten in Cython/C++.

Both use BLAS externally from the OpenBLAS library, which is mostly assembly/C for BLAS.

NumPy uses a flavor of LAPACK called LAPACK Lite, written in C. SciPy vendors the LAPACK that comes with OpenBLAS if you are pip installing, or, if you are using conda, it comes with the MKL library from Intel.


The passage of data from Python -> Fortran -> Python takes some time.

Right now, I am converting some numpy code to rust for exactly this sort of speed advantage. I barely know any rust, and don't know the first thing about writing optimized rust code, but I am already getting constant factor improvements.


The overhead of a function call should be O(1) (I mean it must be, right?), and then most of the functions are probably O(N), where N is the size of the vector, at least.

So, as long as most of the work is in the C or Fortran library, it should be about the same.


But constant factors matter. We don't expect there to be a difference in the big-O complexity between languages. If you have two O(N) implementations, but one is 10x faster than the other (or 1.5x or whatever), that's still a significant difference.


The problem is that the O(1) here can be hundreds of nanoseconds, which has a pretty notable impact for arrays shorter than a few thousand elements.


It depends on how much conversion is needed to feed it into the FFI. Duck typing in python means there are a lot of implicit conversions that will take place.


True. But if the code is reasonably well written, and you are just doing Numpy operations on Numpy vectors and matrices, it should basically be equivalent to doing those operations in Fortran (because most of the actual compute should be performed by code written in Fortran and C).

They asked if Fortran was faster than Numpy, not if it was faster than Python.


I think yes, numpy uses fortran.


Ok, but can I create full stack webapps using Fortran?


Absolutely wild to me that you need to buy access to a digital PDF of a spec for a programming language in 2023.


It's kind of the default for ISO standards, not just for programming languages. Ada is - to my knowledge - the exception: the standard document and the rationale are available free of charge.

However, the draft documents are usually available for free, and the difference between the final draft and the official standard is usually minuscule (typos, punctuation errors, and so forth). You can probably find it on the working group's web site.


Yup, this. The final version the committee actually works on is free for C, C++ and Fortran. After that, ISO has it. I'm honestly not sure I would trust the final as much as the final draft. You can also build the C++ spec from the LaTeX yourself; the repo is public.


> However, the draft documents are usually available for free

Unfortunately, not for SQL :(


It's the default, but are these orgs really getting significant funding from that kind of end-user-hostile behavior? (And some drafts are available and sufficient, but not in all fields.)


I don't know a lot about standards organisations. But the people who actually create the standards are not getting paid by ISO. So I don't know what expenses ISO has to cover other than the cost of publishing the standard itself, which in digital form should be very low.

In a corporate setting it might not be as big an issue, but as a private citizen who wants to read a standard document for educational purposes, paying upward of 100 bucks is fairly expensive, so this makes me a bit grumpy. I completely agree with your sentiment.



Thank you very much!


That's how the ISO works. C++ is no different.

I guess the theory is that only compiler developers need a copy of the standard. Everyone else should rely on their compiler manual


Nobody needs or cares about the "real" document. It's a MacGuffin at this point.

STL (Microsoft's ironically named STL guy) pointed out in a thread about whether there's a difference between the ISO document and the draft that even Microsoft's compiler devs just use the draft.


Do you have a link to that thread?



Infrastructure is a non-zero cost, _and_ knowing that it's "authentic" is a non-zero cost also.


The pdf of the final draft is available for free.



Stupid ISO charging money for standards.



