Lem has got to be one of the most profound writers in all science fiction--dare I say in all of literature? He's up there with writers like Wells and Asimov.
Solaris is deservedly his most famous book, but he has many other books worthy of attention. His Master's Voice is a novel in the vein of Carl Sagan's Contact, but with far more complex philosophical and scientific underpinnings than Sagan's work. Eden and Fiasco are both great as well, even if they fall short of Solaris.
His comedic books never appealed to me quite as much, although there are parts of The Cyberiad with some interesting ideas.
I'm interested in his Summa Technologiae--last time I checked it didn't have an English translation. Has anyone here read it?
TLDR: if you aim for 100x and achieve a 3x, that won't affect my fund much financially but I'll be very happy for you and your team. But if you initially aim for a 3x then I can't invest in the first place based on my portfolio model.
For a good VC fund, a typical portfolio might look something like 30 investments -> 11 fail, 10 return 1x, 6 return 5x, 2 return 15x, and 1 returns 50x. That's a 4x return overall, but most of that return is concentrated in the top 10% of companies. So the investing model only works if a VC fund keeps swinging for the 50x return and occasionally succeeds.
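As a back-of-the-envelope check of that distribution (a minimal sketch in Python; the numbers are the illustrative ones above, not real fund data):

    # Each pair is (number of investments, multiple returned).
    outcomes = [(11, 0), (10, 1), (6, 5), (2, 15), (1, 50)]

    invested = sum(n for n, _ in outcomes)      # 30 equal-sized checks
    returned = sum(n * x for n, x in outcomes)  # 120 units back on 30 in

    print(f"fund multiple: {returned / invested:.1f}x")                   # 4.0x
    print(f"top 3 companies' share: {(1 * 50 + 2 * 15) / returned:.0%}")  # 67%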
The way I would frame exits from my perspective is that I need to keep investing in companies with 50x potential so that hopefully at least one will reach that level during a fund's lifecycle. But along that path, a lot of companies will predictably exit for 5x or 1x or 0x. Those exits don't have a huge impact on my fund's financial returns, but the companies are still run by people who I like and believe in, and the ideas those people worked on are ones that we (and their employees) believed in. So if a company exits for 0x everyone is bummed on behalf of the founders and their team who went all-in. But if the founders aimed at 50x and came out with 2x I am very happy for them personally. I don't get upset at all that not every exit is a 50x because that would be like always betting on 17 in roulette and being upset every time a different number pops up. That's not how the statistical distribution works. What would bum me out is if someone is clearly on track for a 10x or 50x outcome, but then sells at 2x or 4x because that's good enough for them.
As a Hungarian, I'm pretty surprised by this article, since the quality of Hungarian education is getting worse and worse thanks to the "reforms", which basically mean spending less and less on education at every level. Math education used to be world class, but now the "old school" teachers are retiring, and other than a handful of elite schools (Fazekas, Eötvös, Radnóti, etc.) most schools are well below the European average.
I'm not the OP, but I'm from another Eastern European country where the level of education is falling fast and dramatically. A few key indicators of that:
0. I attended an elite gymnasium, and we deliberately used old "hard science" (math, physics, chemistry) textbooks from the '70s-'80s, because they were at a much more advanced level than modern ones. Most of the math/physics now taught during the first and second year at university was taught in 11th-12th grade back then.
1. Anecdotally, all teachers complain that every year students are less motivated, perform worse, and have shorter attention spans. With the local "no child left behind" equivalent, it's enough just to attend some percentage of classes to get passing grades.
2. Every few years some kind of "reform" reduces the difficulty of final exams to keep failure rates at bay and the statistics looking nice.
3. Points 1 and 2 combined lead to universities full of students who have neither the motivation nor the skills to perform there. Since such students make up the majority in most less popular programs, the difficulty level there is reduced to adapt as well.
4. The degrading quality of higher education erodes its prestige, as potential employers no longer trust universities to produce capable candidates. This closes the loop back to point 1, as current pupils deem education "worthless".
5. Fun fact: entrance exams to non-prestigious universities from the '70s-'80s were (and maybe still are) widely used as assignments in national-level competitions when I was at school (late '90s to early '00s). To put that another way: a skill level that was once expected of anybody who wanted to get into university now puts you at the very top.
If Experian were really sneaky they'd pay teams of hackers to steal personal data from businesses so the affected businesses would then have to buy huge contracts for free credit monitoring for their users from Experian. That's not what's happening here, obviously, but it'd make a great Hackers-esque movie.
(3) As you work for clients, keep a sharp eye out for opportunities to build "specialty practices". If you get to work on a project involving MongoDB, spend some extra time and effort to get MongoDB under your belt. If you get a project for a law firm, spend some extra time thinking about how to develop applications that deal with contracts or boilerplates or PDF generation or document management.
(4) Raise your rates.
(5) Start refusing hourly-rate projects. Your new minimum billable increment is a day.
(6) Take end-to-end responsibility for the business objectives of whatever you build. This sounds fuzzy, like, "be able to talk in a board room", but it isn't! It's mechanically simple and you can do it immediately: Stop counting hours and days. Stop pushing back when your client changes scope. Your remedy for clients who abuse your flexibility with regards to scope is "stop working with that client". Some of your best clients will be abusive and you won't have that remedy. Oh well! Note: you are now a consultant.
(7) Hire one person at a reasonable salary. You are now responsible for their payroll and benefits. If you don't book enough work to pay both your take-home and their salary, you don't eat. In return: they don't get an automatic percentage of all the revenue of the company, nor does their salary automatically scale with your bill rate.
(8) You are now "senior" or "principal". Raise your rates.
(9) Generalize out from your specialties: MongoDB -> NoSQL -> highly scalable backends. Document management -> secure contract management.
(10) Raise your rates.
(11) You are now a top-tier consulting group compared to most of the market. Market yourself as such. Also: your rates are too low by probably about 40-60%.
Try to get it through your head: people who can simultaneously (a) crank out code (or arrange to have code cranked out) and (b) take responsibility for the business outcome of the problems that code is supposed to solve --- people who can speak both tech and biz --- are exceptionally rare. They shouldn't be; the language of business is mostly just elementary customer service, of the kind taught to entry level clerks at Nordstrom's. But they are, so if you can do that, raise your rates.
David Foster Wallace wrote about the relationship between Standard Written English and social power structures in America. See "Authority and American Usage" or its reprint in Harper's, "Tense Present".
For example, a popular German radio host and podcaster once said:
"If I want to learn something about a topic, and ask a question on air, almost nobody answers. But if instead I make a false (or naive) claim about that topic, I receive lots of replies correcting me, and might even get an interview with an actual expert on that topic."
I have four kids...two in college now...the other two are on track to be in a few years...I get compliments on them all the time...
The number one piece of advice I can give you is to suspend any preconceptions you have, consciously or unconsciously, about the kind of person they will be...don't force them into some mold you have in mind for them...they will be separate and distinct beings from birth...treat them as such with as much love and guidance as you can muster...let them breathe...
Also:
Expose them to as many things as possible, and let them choose the things that they want to follow up on...
Model good behavior, so they will see what good behavior looks like...point out good behavior to them as opportunities arise...
Xmonad is actually a bit historically related to StumpWM; a number of early users had been using StumpWM for its greater programmability compared to Ratpoison, but weren't all that keen on Common Lisp, so when dons decided to write a WM in Haskell... I was one of them, and even ported over some of the more popular extensions like the search engine plugins (which were themselves originally modeled on 'surfraw', a package of shell scripts written mostly by a fellow named Julian Assange; everything is connected).
The limits on memory access are physical, as illustrated by Grace Hopper's famous video about nanoseconds and the speed of light.
Should computer algorithms always assume they need to model caches, since caches are never going away? In standard computational-complexity analysis, every memory access is treated as a uniform unit cost, but real memory doesn't and never will behave that way.
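As a small illustration of the gap (my sketch, not the parent's; assumes NumPy and a typical cache hierarchy): summing a matrix row-by-row versus column-by-column performs the same number of operations, but the strided traversal wastes most of every cache line it fetches:

    import time
    import numpy as np

    a = np.random.rand(4000, 4000)  # row-major (C order) by default

    def rowwise_sum(m):
        # Sums one "row" at a time; whether those rows are contiguous
        # in memory depends entirely on the layout of m.
        return sum(row.sum() for row in m)

    t0 = time.perf_counter(); rowwise_sum(a);   t1 = time.perf_counter()
    t2 = time.perf_counter(); rowwise_sum(a.T); t3 = time.perf_counter()

    print(f"contiguous rows: {t1 - t0:.3f}s")
    print(f"strided rows:    {t3 - t2:.3f}s")  # typically several times slower

Same big-O either way; the difference is purely the memory hierarchy.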
Radiohead also released an EP of OK Computer B-sides called "Airbag / How Am I Driving?" It's absolutely brilliant but relatively unknown. If you haven't heard it, seek it out.
On X11 you can do better: the clipboard isn't actually stored anywhere, the program requesting the paste gets the data directly from the program that claimed the clipboard.
The clipboard lib you're using just spawns xclip (or xsel) on Unix. xclip has a -loops argument that is the maximum number of pastes. So you could just spawn:
xclip -loops 1 -in -selection clipboard
And the password would be available to paste exactly once. Unfortunately I don't think xsel has any such option.
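A minimal sketch of wiring that up from code (my example; assumes Python, an X11 session, and xclip on the PATH):

    import subprocess

    def copy_once(secret: str) -> None:
        # -loops 1: serve exactly one paste request, then exit, after
        # which no program owns the selection and the data is gone.
        subprocess.run(
            ["xclip", "-loops", "1", "-in", "-selection", "clipboard"],
            input=secret.encode(),
            check=True,
        )

    copy_once("correct horse battery staple")  # placeholder secret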
There are clipboard manager apps that could grab the paste immediately and persist it, but I'm not sure how common they are or whether any are installed by default anywhere.
"most common uses are tractable" does not guarantee that many common uses will have been correctly analysed.
As to why it's disallowed: stack space is limited, and tail call optimisation isn't taken for granted. Here's what the Joint Strike Fighter coding standards (thataway -> http://www.stroustrup.com/JSF-AV-rules.pdf) say:
AV Rule 119 (MISRA Rule 70)
Functions shall not call themselves, either directly or indirectly (i.e. recursion shall not be allowed).
Rationale: Since stack space is not unlimited, stack overflows are possible.
Exception: Recursion will be permitted under the following circumstances:
1. development of SEAL 3 or general purpose software, or
2. it can be proven that adequate resources exist to support the maximum level of recursion possible.
So yes, you're allowed it if you do that extra work, but given that you can replace tractable recursion with a loop anyway (sketch below), the win from expressing the problem recursively has to compensate for the cost of that proof.
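A minimal sketch of that mechanical replacement (my illustration, not from the standard): the recursion's implicit call stack becomes an explicit structure whose worst-case size you can bound and check up front.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        left: "Optional[Node]" = None
        right: "Optional[Node]" = None

    def depth_recursive(node: Optional[Node]) -> int:
        # Disallowed under AV Rule 119: stack use grows with the input.
        if node is None:
            return 0
        return 1 + max(depth_recursive(node.left),
                       depth_recursive(node.right))

    def depth_iterative(root: Optional[Node]) -> int:
        # Same traversal with an explicit stack whose maximum size can
        # be measured, capped, and handled as an ordinary error.
        deepest, stack = 0, [(root, 1)]
        while stack:
            node, d = stack.pop()
            if node is not None:
                deepest = max(deepest, d)
                stack.append((node.left, d + 1))
                stack.append((node.right, d + 1))
        return deepest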
It is NOT limited to the external RAM speed, and the best proof is that it actually uses over 40GB/s of memory bandwidth (see the rough accounting below).
For each byte of data passing through pv:
1. the byte is read from yes's memory into CPU registers by the write syscall handler in the kernel
2. the byte is written to kernel's internal buffer associated with the pipe
3. the byte is read back in the read syscall called by pv
4. the byte is written to a buffer in pv memory
5. and that's the end, because the write syscall executed by pv on /dev/null very likely doesn't actually bother reading the submitted buffer at all
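Rough accounting of what that implies (my arithmetic; the pipe throughput figure is assumed for illustration, not measured):

    # Steps 1-4 above each read or write the byte once, so every byte
    # moving through the pipe is touched ~4 times in memory.
    pipe_rate_gbs = 10    # hypothetical throughput reported by pv
    touches_per_byte = 4  # steps 1-4; step 5 costs nothing
    print(f"memory traffic ~= {pipe_rate_gbs * touches_per_byte} GB/s")
    # If splice(2) skips the userspace copies (steps 3-4), the same
    # pipe rate implies only ~2 touches per byte, i.e. half the traffic.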
edit: Actually it might only be 20GB/s because on Linux pv seems to use the splice syscall to transfer data from stdin to stdout without copying to userspace.
This is also the reason why further "optimization" of this program in assembly was a fool's errand: the bulk of CPU load is in the kernel.
References for you and the other poster who asked:
Peter Gärdenfors: Conceptual Spaces: The Geometry of Thought. MIT Press, 2000 (paperback 2004).
It is very easy reading. The problems of geometric meaning theory are compositionality and quantification - how to get the expressivity of logical representations in addition to nearness measures, fuzziness and so on. There are some interesting approaches:
Martha Lewis & Jonathan Lawry: Hierarchical conceptual spaces for concept combination. Artificial Intelligence 237 (2016): 204-227.
Diederik Aerts, Liane Gabora, Sandro Sozzo: Concepts and their dynamics: a quantum-theoretic modeling of human thought. Topics in Cognitive Science 5 (4) (2013):737-772. [and other work by Aerts]
Aerts' work personally fascinates me, but it's unfortunately above my level of mathematical maturity. This is a general problem in this literature: maybe some solutions are already there, but they also need to be presented in a way that allows linguists to understand and use the methods. Montague was lucky (well, not personally, of course), because he had scholars who were able to package his dense ideas into more verbose and accessible textbooks.
Another short book worth reading in my opinion, though very programmatic in nature:
Jens Erik Fenstad: Grammar, Geometry, & Brain. CSLI Publications 2009.
Attempting to be as accomplished/skilled as the union of people you read on the Internet is a fool's errand. You have to accept you'll never know everything and that, for almost all things, there will be someone -- or a lot of someones -- much better than you.
Pretend you were working at a company with a hundred engineers. Do you understand how easy it is for every single one of them to simultaneously feel like you do? The React mavens feel like they're just knocking together JS and wonder when they'll be allowed to do real engineering. The backend specialists wonder why they don't understand networking or servers better. The DevOps folks envy folks who build things. The American office wonders why they can't speak foreign languages; the German office marvels that anyone can learn Japanese; the Japanese office worries their English isn't up to the global standard.
There's nothing wrong with specialization -- it's how we stay sane. A very workable and easy-to-understand formula early in your career is to specialize in two things; you don't have to be better at X and better at Y than everyone you meet, you have to be "better at X than anyone who is better at Y" and "better at Y than anyone who is better at X." This is very, very achievable, regardless of how competent your local set of peers is.
Also, unsolicited advice as a side note, but life is too short to spend too much time in negative work environments. Assuming the negativity isn't coming from you, changing environments to one of the (numerous!) places where happy people do good work might be an improvement.
My take is that Clojure's community is distinct from other Lisp communities.
Old school Lispers are the Jacobites of the Computer Age. Their warnings were ignored, so their nightmares came true; and the world has changed so much, as a result, that few can even imagine an alternative. They were right about nearly everything, and now they are irrelevant.
So pour out a drink for the king over the water and for Genera. Curse this present dark age of bureaucrats and Posix. And then MOVE ON. The Clojurists are the first sizable community of Lispers that has done that.
By keeping the code as visible (read: small) as possible, I see more code and can better reason at a macro level. To scale this down to the micro level of dealing with individual compiler passes, I replace all the traditional programming paradigms with others in a sort of one-for-one exchange. In this way, I develop a new set of idiomatic programming methods so concise that they can begin to be read the way we read and chunk English phrases. It then becomes actually easier to just write out most algorithms, because the normal name for such an algorithm is basically as long as the algorithm itself written out. This means I start to learn to chunk idioms as phrases and can read code directly, without the cost of name-lookup indirection.

I can get away with this because I've made reusability and abstraction vastly less important: I can literally see every use case of every idiom on the screen at the same time. It would literally take more time to write the reusable abstraction than to just replace the idiomatic code in every place. It's a case of the disposability of code reaching a point where reusability is much less valuable.
This means that in those cases where reuse is valuable, it's very valuable, and it comes to the fore and you can see it as the critical thing that it is. It doesn't get drowned in otherwise petty abstractions that assist reusability, since we don't need that anymore.
Furthermore, if I write my code correctly, there is very, very little boilerplate in the compiler. Almost none. This means that every line is significant. You don't get the fun of feeling like you're accomplishing something by typing in lots of excess boilerplate, but you also have no wasted architecture. Because rewriting the architecture is so trivial, basically everything becomes important, and you don't have petty bookkeeping code lying around. You know that everything is important, and there are no superfluous bits.
The result, as mentioned elsewhere, is code that is getting continuously simpler, rather than continuously more complex. The code is getting easier to change over time, not harder. The architecture is getting simpler and more direct and easier to explain. Because it costs so little to re-engineer the compiler, I can do so constantly, resulting in little to no technical debt.
This is an intentional, synergistic choice of a host of programming techniques, styles, disciplines, and design choices that enables me to program this way. Give up one of them and things start to break down. It allows for a highly optimized code base that has all of the desirable properties people wish their code bases had, and it scares people. I think that's a good thing, because I don't want people to see this codebase as just another thing. I want them to see that this is something truly different. How can I get away with no module system? How can I get away with no hierarchy? How can I get away with having everything at the top level, with almost no nested definitions? How can I get away with writing a compiler that is not only shorter, but fundamentally simpler from a PL standpoint than standard compilers of similar complexity, using only function composition and name binding? How can I get a code base that has more features but continues to shrink?
By chasing smaller code. :-)
I assure you, and I'll make good on this in another reply here, I could get you up and running on understanding the code and how it works faster than just about any other compiler project out there. In the end, one of the goals I want for this compiler is for people to say, "Woah, wait, that's it? That's trivially simple." The more I can push people to think of my compiler as so trivial as to be obvious, the more I win. The compiler really is so dirt simple as to shock any normal compiler writer.
But to make it that simple, I have to do things in ways that people don't expect, because people expect complexity and indirection; they expect unnecessary layers for "safety", and they expect code that needs built-in protections because the code is too complex to be obviously correct.
I'm pushing in the other direction. If you can see your entire compiler in one go on a standard computer screen, what sort of possibilities does that open up? You can start thinking at the macro level, and simply avoid a whole host of problems because they are obviously wrong at that level. When you aren't afraid to delete your entire compiler and start from scratch, what sort of possibilities does that open up?
Haskell has one of the best namespace/module systems of any language I've ever seen. It doesn't 'leak' these names; they're just the names of the data constructors, they're supposed to be visible. If you need to control access, use the module system.
I think it's perfectly legitimate for examples of particular features to omit other features (like modules, in this case).
With the wide adoption of WebGL, it's a good time to get involved in graphics. Furthermore, GPUs are taking over, especially with the advent of machine learning (Nvidia stock grew ~3x, AMD ~5x last year). The stuff Nvidia has been doing recently is kind of crazy. I wouldn't be surprised if in 15 years, instead of AWS, we are using a GeForce cloud or something, just because Nvidia will have an easier time building a cloud offering than Amazon will have building a GPU.
These are some good resources to get started with graphics/games
# WebGL Programming Guide: Interactive 3D Graphics Programming with WebGL
Historically, C++ has definitely been THE language for doing graphics, but if you are starting out these days, you would have to have really compelling reasons to start with C++ rather than JavaScript and WebGL. And that's coming from someone who actually likes C++ and used to write it professionally.
This is more of a college textbook, if you'd prefer that, but the WebGL one is more accessible and less dry.
# Physically Based Rendering & Real-Time Rendering
These discuss some state-of-the-art techniques in computer graphics. I'm not going to claim to have really read them, but from what I've seen they are very solid.