IME, code on prod rarely ships with a sourcemap. and as far as i can tell, GP is talking about other people's minified code, not something where they control the build process
R lets you do macro-ish things at runtime [1], and `.` is a valid variable name [2]. so %>% can just evaluate the AST of its right argument in an environment with `. = a`.
(it's probably a bit more involved because %>% also supports
a %>% f(b) === f(a, b)
in which case %>% has to do some AST surgery to splice another argument in front of `b`.)
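(for a rough idea of just the simple `.`-form, here's a handwavy sketch -- the `%.>%` below is a made-up toy, not the real implementation, and it skips the argument-splicing case entirely:)
`%.>%` <- function(lhs, rhs) {
  rhs_ast <- substitute(rhs)              # capture the right-hand side unevaluated
  env <- new.env(parent = parent.frame())
  assign(".", lhs, envir = env)           # bind `.` to the left-hand side's value
  eval(rhs_ast, env)                      # evaluate the AST with `.` in scope
}
c(1, 4, 9) %.>% sqrt(.)                   # 1 2 3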
tbh i haven't checked, but isn't the transpilation step for typescript just stripping out the type annotations? TSC is build-time only, so it won't factor into your bundle size, and i can't imagine the generated JS is significantly bigger than source -- if anything, it should be smaller ;p
For comparison, ClojureScript, another language that compiles to JavaScript, does add to the bundle size, but in my experience that's mostly because of the runtime (core functions and data types), since the Google Closure compiler optimizes the initial transpiled JS.
I would guess running the TypeScript output through the Google Closure compiler (or adding similar optimizations directly to the TypeScript compiler) could make TypeScript bundles comparable to JavaScript bundles or smaller. It looks like people see significant improvements running TypeScript output through Closure compared to just WebPack [0].
It definitely adds to the bundle size, but how much it adds depends on your implementation. This discussion on the topic is pretty interesting [0]. One comment from that discussion says “it is possible to double the size of your source code”.
i would recommend getting comfortable with doing stuff with base R, then trying tidyverse. Starting with dplyr might get you results quick, but its "special evaluation" actively confuses your understanding of how the base language actually works (speaking from experience with an R course and subsequently helping other confused folks)
Where'd `height` and `gender` come from in the dplyr version?
They're just columns in a DF, not variables, and yet they act like variables...
Well that's the dplyr magic baby!
dplyr (and other tidystuff) achieves this "niceness" by doing a whole bunch of what amounts to gnarly metaprogramming[1] -- that example was taken from a whole big chapter about "Tidy evaluation", describing how it does all this quote()-ing and eval()-ing under the hood to make the "nicer" version work. it's (arguably) more pleasant to read and write, but much harder to actually understand -- "easy, but not simple", to paraphrase a slightly tired phrase.
---
[1] IIRC it works something like this. the expressions
height < 200
gender == "male"
are actually passed to `filter` as unevaluated ASTs (think lisp's `quote`), and then evaluated in a specially constructed environment with added variables like `height` and `gender` corresponding to your dataframe's columns. IIRC this means it can do some cool things like run on an SQL backend (similar to C#'s LINQ), but it's not something i'd expose a beginner to.
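to make that concrete, here's a toy version of the mechanism -- `my_filter` is a made-up name and dplyr's real implementation is a lot fancier, but it's the same base-R pattern underneath:
my_filter <- function(df, cond) {
  cond_ast <- substitute(cond)                # e.g. the AST of `height < 200`
  keep <- eval(cond_ast, df, parent.frame())  # df's columns act like variables here
  df[keep, , drop = FALSE]
}
people <- data.frame(height = c(150, 210), gender = c("male", "male"))
my_filter(people, height < 200)               # keeps only the first row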
My experience is that this weird evaluation order stuff is only confusing for students with a lot of programming experience who already expect nice lexical scope. For those coming in from Excel, the tidyverse conventions are no problem and are in fact easier than all the pedantic quoting you have to do in something like Pandas. It only gets confusing when you want to write new tidyverse functions, and even then, base R isn’t any simpler: the confusing evaluation order is built into R itself at the deepest level.
EDIT: i gotta admit, you sound like you've got more experience with teaching R than me. so perhaps my opinions here are a bit strong for what they're based on, i.e. tutoring a couple of non-programmer friends and my own learning process. still...
> My experience is that this weird evaluation order stuff is only confusing for students with a lot of programming experience who already expect nice lexical scope
fair point, but for the most part, R itself does use pretty standard lexical scoping unless you opt into "non-standard evaluation" by using `substitute`[1]. so building a mental model of lexical scoping and "standard evaluation" is a pretty important thing to learn. after that, the student can see how quoting can "break" it, or at least be able to understand a sentence like "you know how evaluation usually works? this is different! but don't worry about it too much for now". and i think dropping someone new straight into tidyverse stuff gets in the way of this process.
> and even then, base R isn’t any simpler: the confusing evaluation order is built into R itself at the deepest level.
i mean, quoting can't really work without being deeply integrated into the language, can it? besides:
- AFAICT base R data manipulation functions don't use it a lot. [2]
- for the most part, R's evaluation order can be ignored (at a certain learning stage) because it's not observable if you stick to pure stuff, which you probably should anyway.
yeah, the intersection of lazy evaluation and side-effects (incl. errors/exceptions) gets confusing, you definitely have to be there to help the student out of a jam. but i think it's useful to start out pretending R follows strict evaluation (because it's natural[1]) and then, once the student gets their bearings, you can introduce laziness.
---
[1] well, not "natural", but aligned with how math stuff is usually taught/done. in most cases, when asked to evaluate `f(x+1)`, you first do `x+1` and then take `f(_)` of that.
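(and when the time does come to demonstrate the laziness, a throwaway two-liner like this usually does the trick:)
f <- function(x) 42     # never touches its argument
f(stop("boom"))         # returns 42 -- under strict evaluation this would blow up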
tldr: basically, R passes all function arguments as bundles of `(expr_ast, env)` [called "promises"]. normally, they get evaluated upon first use, but you can also access the AST and mess around with it. AFAIK this is called an "Fexpr" in the LISP world.
(originally i had a nice summary, but my phone died mid-writing and i'm not typing all that again, sorry!)
it's very powerful (at the cost of being slow and, i imagine, impossible to optimize). it enables lots of little DSLs everywhere - e.g. lm() from stats, aes() from ggplot2, any dplyr function - which can be both a blessing and a curse.
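a tiny illustration of the "access the AST" part (`show_ast` is just a name i made up):
show_ast <- function(x) {
  ast <- substitute(x)         # the promise's expression, unevaluated
  print(ast)                   # prints the code itself
  eval(ast, parent.frame())    # ...and we can still force it afterwards
}
show_ast(1 + 2 * 3)            # prints `1 + 2 * 3`, then returns 7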
x = match y:
    case 0: "zero"
    case 1: "one"
    case x:
        y = f(x)
        y+y
print(x)
it can require semicolons or parentheses or even braces somewhere if that's necessary to make the grammar work, idc. just don't make me type out `x = ...` a hundred times...
(yes, i know you could use the walrus operator in the third case)
(also, if anyone wants to reply with "use a named function because READABILITY", please save it)
Judging by the votes you got, your proposal is not well-liked.
In general, breaking Python's syntax rules to introduce block expressions or what you want to call it seems like a steep price to pay for what amounts to syntactic sugar in the end.
However, your proposal actually misses the mark in the same way the actual match/case does. What would be /really/ handy in Python is something like
{'some_key': bind_value_to_this_name} = a_mapping
because it is so common, especially if you're consuming a JSON API. You of course see the myriad problems with the above.
I think you should read the PEPs on match/case, these things have actually been considered. One idea was to introduce a keyword to say which variables are inputs and which are outputs, but that also violates the general grammar in a way that isn't very nice.
The accepted solution has warts, I agree, but at some point you just have to accept that you can't reach some perfect solution.
> Judging by the votes you got, your proposal is not well-liked.
i'm at peace with that :) [1]
> In general, breaking Python's syntax rules to introduce block expressions or what you want to call it seems like a steep price to pay for what amounts to syntactic sugar in the end.
that's your opinion, i disagree! i think Python would be a better, more pleasant to use language if they figured this out. a girl can dream, ok?
> What would be /really/ handy in Python is something like
{'some_key': bind_value_to_this_name} = a_mapping
yes, extending the destructuring syntax would be nice, i agree! but the original question was "Please suggest a pattern matching expression syntax [...]", and the post you were responding to was specifically talking about `match`'s syntax. [2]
> I think you should read the PEPs on match/case
i read them when they came out. i'm assuming you're referring to this section from PEP 622:
> "In most other languages pattern matching is represented by an expression, not statement. But making it an expression would be inconsistent with other syntactic choices in Python."
i believe Python's statement-orientation kinda sucks, so to me this is just "things aren't great, let's stay consistent with that". yeah, yeah, "use a different language if you don't like it" etc.
---
[1] well, maybe not quite, after all i'm here arguing about it.
[2] `match` uses patterns like the one you described in `case` arms, but is distinct from them. i don't see why dict-destructuring syntax couldn't be added in a separate PEP.
I'm interested in similar proposals that can provide the basis of some common syntax that could be transpiled to other functional programming languages https://github.com/adsharma/py2many
I'm less interested in the syntax (will take anything accepted by a modified ast module), more in the semantics.
Here's a syntax I played with in the past:
def area(s: Shape):
    pmatch(s):
        Circle(c):
            pi * c.r * c.r
        Rectangle(w, h):
            w * h
    ;;

def times10(i):
    x = pmatch(i):
        1:
            10
        2:
            20
    ;;
    return x
Two notes:
* Had to be pmatch, since match is more likely to run into namespace collision with existing code.
* The delimiter `;;` at the end was necessary to avoid ambiguity in the grammar for the old parser. Perhaps things are different with the new PEG parser.
iirc for some reason the amount of antibodies you have falls sharply after 1-3 months and you're vulnerable again. anecdotally, a friend of a friend (EMT) had covid like 4 times before vaccines were available.
Antibody count falls at first, but relevant antibody-producing cells were found[0] to remain in the bone marrow. I suppose it is not the only determining factor in susceptibility to symptomatic infection, though.
I’d be curious whether currently available vaccines stimulate such preexisting antibody-producing bone marrow cells of past COVID patients (which sounds more optimal on the face of it), or whether they trigger yet another variety of antibodies to be produced in addition to the “native” one.
> why `m a -> (a -> m b) -> m b` but not something else?
it's basically continuation-passing style (`a -> m b` is the "continuation"); you might as well ask "why can you represent so many control-flow things using CPS?". idk why, but you can!
from another angle, you could compare Monad with the less powerful Applicative. a formulation[0] that's easier to parse than the usual one[1] is:
class Functor f => Applicative f where
  unit :: f ()
  pair :: f a -> f b -> f (a, b)
if you're familiar with JS Promises, a rough analogy would be `Promise.resolve()` for `unit` and `Promise.all([pa, pb])` for `pair`: you can combine independent computations, but the second can't depend on the result of the first (that's the extra power Monad's bind gives you).
a detail that doesn't contradict anything you said but may be useful for someone unfamiliar with it:
"avoid success at all costs" is usually meant to be parsed as "avoid [success at all costs]" i.e. don't compromise the language just to make it more popular.
from a look at the readme, you combine those `$.TYPE` things to build a validation function that checks if its argument matches some pattern (and throws an exception if it doesn't).
import * as $ from '@appliedblockchain/assert-combinators'

const validateFooBar = (
  $.object({
    foo: $.string,
    bar: $.boolean
  })
)

// probably roughly equivalent to
/*
const validateFooBar = (x) => {
  console.assert(
    typeof x === 'object' &&
    typeof x.foo === 'string' &&
    typeof x.bar === 'boolean'
  )
  return x
}
*/

const test1 = { foo: "abc", bar: false }
const test2 = { foo: 0, quux: true }

const { foo, bar } = validateFooBar(test1) // ok
const oops = validateFooBar(test2)         // throws error
the source is pretty readable too if you want to get an idea how it works.