> Many valuable, high-quality pieces of content that people would find useful never make it into Google's index.
I see your point in the light of the article (not indexed = not visible), but it feels like the things that _do_ make it need to follow very particular content and style patterns to rank high.
Anecdotally, this observation comes from searching for any term and looking at the results: they are usually similar-looking, plausible-but-actually-low-quality pages that seem to follow the same or similar structure and carry the same content. This does indeed limit the diversity and depth of information, but I'm not so sure it reduces spam, as these low-quality sites seem to be as prevalent as ever, if not more so.
From experience writing articles for a small tech blog, this means it's quite difficult to get well-researched articles to rank well, even if they're indexed.
For example, I've written an article on how to block hotlinking (I've just checked, and Google says it's indexed). If you search for this, my article on a not-so-well-known blog is nowhere to be found(*)(**), and that's somewhat expected, for a myriad of reasons. The problem isn't that my post doesn't rank, but rather that none of the top-ranking (or even not-so-top-ranking) results are correct. They are either about how to do this in cPanel or whatever, which is ineffective (but, granted, could be what people are looking for), or instructions using the `Referer` header, which is equally ineffective.
These days, browsers offer headers like `Cross-Origin-Resource-Policy`, which can completely solve this particular hotlinking issue, unlike `Referer` checks, which are easily bypassed using `referrerpolicy="no-referrer"`. However, because most 'authorities' seem to be wrong on this issue, the correct result isn't surfaced, because this is a hard problem to solve algorithmically (or even manually).
(*) This doesn't affect just Google, though.
(**) Because it's indexed, adding the right keywords (which you wouldn't do in this case unless you already knew the answer) does bring it up, although from federated high-authority sites instead of the original canonical source.
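On the hotlinking point above: for what it's worth, the header-based fix is essentially a one-liner on the server. A minimal sketch, assuming an Express app serving static assets from ./public (the paths, port, and the 'same-site' policy choice are illustrative, not a prescription):

    import express, { type NextFunction, type Request, type Response } from 'express';

    const app = express();

    // Sketch only: set Cross-Origin-Resource-Policy on every response so that
    // supporting browsers refuse to embed these resources from other sites
    // (e.g. hotlinked <img> tags), regardless of what the embedding page sends.
    app.use((_req: Request, res: Response, next: NextFunction) => {
        // 'same-site' still allows sibling subdomains; 'same-origin' is stricter.
        res.setHeader('Cross-Origin-Resource-Policy', 'same-site');
        next();
    });

    app.use(express.static('public')); // assumed assets directory

    app.listen(8080);

Unlike a `Referer` check, this doesn't depend on what the embedding page sends, which is exactly what `referrerpolicy="no-referrer"` gets around.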
Interesting approach! As an author of another JS sandbox library[1] that uses workers for isolation plus some JS environment sanitisation techniques, I think that interpreting JS (so, JS-in-JS, or as in this case, JS-in-WASM) gives you the highest level of isolation, and also doesn't directly expose you to bugs in the host JS virtual machine itself. Since you're targeting Node, this is perhaps even more important because (some newer developments notwithstanding) Node.js doesn't really seem to have been designed with isolation and sandboxing in mind (unlike, say, Deno).
From the API, I can't tell whether `createRuntime` allows you to define calls to the host environment (other than `fetch`). This would be quite a useful feature, especially because you could use it to restrict communication with the outside world in a controlled way, without it being an all-or-nothing proposition.
Likewise, it doesn't seem to support the browser (at least, based on a quick check with esm.sh). I think that could be a useful feature too.
I'll run some tests as I'm curious what the overhead is in this case, but like I said, this sounds like a pretty solid approach.
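To illustrate the kind of host-call surface I mean, here is a purely hypothetical TypeScript sketch. It is not the library's actual API; the option and function names are made up for illustration only:

    // Hypothetical shape (not the real createRuntime options): the host exposes an
    // explicit allowlist of functions, and the guest can call those and nothing else.
    type HostFunction = (...args: unknown[]) => unknown;

    interface HypotheticalRuntimeOptions {
        fetch?: typeof fetch;                          // roughly what the API covers today
        hostFunctions?: Record<string, HostFunction>;  // controlled bridge to the host
    }

    // Example: the guest may look up records, but cannot reach anything else on the host.
    const options: HypotheticalRuntimeOptions = {
        hostFunctions: {
            lookupRecord: (id) => ({ id, name: 'example' }),
        },
    };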
The quickjs interpreter's C code counts the bytes it has allocated and refuses to allocate more once over the limit; the count is decremented by the allocation size when memory is freed. This malloc function is used everywhere the interpreter allocates memory:
    static void *js_def_malloc(JSMallocState *s, size_t size)
    {
        void *ptr;
        /* Do not allocate zero bytes: behavior is platform dependent */
        assert(size != 0);
        if (unlikely(s->malloc_size + size > s->malloc_limit))
            return NULL;
        ptr = malloc(size);
        if (!ptr)
            return NULL;
        s->malloc_count++;
        s->malloc_size += js_def_malloc_usable_size(ptr) + MALLOC_OVERHEAD;
        return ptr;
    }
Exploring Proof of Work (PoW) as a substitute for CAPTCHAs is an interesting idea (PoW was originally conceived as a spam deterrent, after all), and one that I have considered (and use) on some web properties I manage. Not only does it obviate the need for 'trusted' third parties, it also has the potential to reduce the accessibility issues often associated with traditional CAPTCHAs. It also seems like a solution that scales nicely, since each 'proof' is produced by the client and verification is cheap, and one that finally ends the arms race against malicious traffic by removing the need to 'prove humanity'.
However, it's one of those solutions that look good on paper but, upon closer inspection, break down entirely or come with rather substantial tradeoffs. Set aside the environmental discussion about energy consumption for a moment, and let's face the reality that computational power is ridiculously inexpensive.
As a thought exercise, imagine you're trying to use PoW to ward off spammers (or the attack du jour), and you decide that a 1-cent expenditure on computation would be a sufficient deterrent. Let's say that renting a server costs $100/month (a bit on the higher end), or 0.004 cents per second.
So, if you wanted a PoW system that would cost the spammer 1 cent, you'd need to come up with a computational task that takes about 250 seconds, or over 4 minutes, to solve. That kind of latency just isn't practical in real-world applications. And that ignores that 1 cent is probably a ridiculously low price for protecting anything valuable.
Of course, you may consider this as an alternative to regular CAPTCHA services. A quick search suggests that getting CAPTCHAs solved costs something like $3 per 1,000, or 0.3 cents per CAPTCHA. That changes the above calculation to about a minute of compute, which still seems unacceptable considering that you might, e.g., drain your users' batteries.
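For reference, the back-of-the-envelope arithmetic behind those numbers (using the assumed $100/month server price from above):

    // Rough cost model: how long a PoW challenge has to run to cost the attacker
    // a given amount, at an assumed server rental price of $100/month.
    const centsPerSecond = (100 * 100) / (30 * 24 * 3600); // ≈ 0.0039 cents of compute per second
    const secondsForOneCent = 1 / centsPerSecond;           // ≈ 259 s, i.e. over 4 minutes
    const secondsForCaptchaPrice = 0.3 / centsPerSecond;    // ≈ 78 s, i.e. about a minute
    console.log({ centsPerSecond, secondsForOneCent, secondsForCaptchaPrice });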
So, overall, while I'd like something like this to work, it probably only deters attackers who aren't running a full browser and who also aren't targeting you in particular.
That makes no difference: you'd have to scale the challenge to many minutes, as GP explained, and that's not something any user will sit through. What's the point of issuing a challenge only spammers will pass?
Indeed, although undocumented, this is implemented and is a PAKE variant ([1] and [2]).
The way that it works, at a high level, is similar to how SRP works. Two random salts are generated (let's say, A and B), where A is used for authentication (and hence public) and B is used for deriving the other cryptographic keys.
When you authenticate, you retrieve A and then you prove to the server that you know what scrypt(A, password) is. At this point, the server provides you with B, and you can use this information to derive scrypt(B, password), which in turn you use to derive other cryptographic keys.
Since it's an oblivious password store, there are additional steps to make the protocol stateless (from the server's perspective) and to make the parties commit to the random values used, so that runs of the protocol cannot be replayed.
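To make the two-salt idea concrete, here is a rough client-side sketch. It's illustrative only: the real proof and commitment steps are a proper PAKE rather than the simple HMAC challenge used here, and the function names, parameters, and transport are my assumptions, not the store's actual protocol:

    // Sketch of the client-side derivation flow: authenticate with a key derived
    // from the public salt A, and only then obtain salt B to derive the real keys.
    import { scryptSync, createHmac } from 'node:crypto';

    function deriveKeys(
        password: string,
        saltA: Buffer,                           // public, used for authentication
        requestSaltB: (proof: Buffer) => Buffer, // stands in for the server round-trip
        challenge: Buffer                        // server-provided nonce (assumed)
    ) {
        // Prove knowledge of scrypt(A, password) without sending the password itself.
        const authKey = scryptSync(password, saltA, 32);
        const proof = createHmac('sha256', authKey).update(challenge).digest();

        // The server releases salt B only after verifying the proof.
        const saltB = requestSaltB(proof);

        // scrypt(B, password) is then used to derive the other cryptographic keys.
        const masterKey = scryptSync(password, saltB, 32);
        return { authKey, masterKey };
    }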
> but the password database itself is still vulnerable to dictionary attacks once it’s stolen
This is correct, and I'm not sure there are good ways to prevent this or equivalent scenarios from occurring. Therefore, you should see this as an additional layer on top of your already secure password and not as a substitute for a secure password.
The reason for having this mechanism at all is to protect your password from brute-forcing by the public at large, in a scenario where the server operator is semi-trusted. Without it, you have three alternatives: (1) forgo passwords entirely; (2) make the salted password 'public', along with the salt, which is all the information needed to brute-force it; or (3) use more "normal" authentication flows without PAKE, in which case you need to trust the server even more. If you insist on using passwords, this is a compromise between (2) and (3), i.e., between 'anyone can break it' and 'trust the server entirely'.
> This is correct, and I'm not sure there are good ways to prevent this or equivalent scenarios from occurring.
I believe there isn't, indeed; sorry this part came across as a criticism.
I actually deployed a PAKE at work once, for a corporate CRUD app, and the entire security hinged on the login/password pair. Clients authenticate to the server with the password, and the server authenticates to the clients with its database entry.
Sure, the password could be brute-forced if the database leaked, and sure, anyone holding a database entry could impersonate the server, but this reduced security allowed simpler and more convenient administration: no need to bother with a PKI, just take good care of the password database and reset everyone's passwords when we suspect a leak.
(Now if this was for the wider internet I would have added a PKI layer on top to prevent server impersonation.)
> The reason for having this mechanism […]
Yeah, PAKE is real nice. Ideally, every password-based login system would use an augmented PAKE under the hood. Not only does it protect the passwords better, the protocol itself doesn't need to happen over a secure channel (which can help reduce round trips), and the bulk of the computation (slow password hashing) happens on the client side. This reduces both network load and server load; what's not to like?
What do you mean by replacing trust with crypto? Like in 'code is law'? If so, yeah, you can't replace one thing with the other because they're fundamentally different things that may only overlap in certain areas.
But on a broader scale, I don't see what in cryptography makes power inherently more concentrated. Crypto is just a way of enforcing certain trust relations that have already been established or agreed upon. Just as you can use crypto to help centralise power (e.g., allowing only signed applications to run, which can only show signed content), you can use it to help decentralise power, with tools for presenting content confidentially and for verifying that your applications haven't been tampered with.
In both cases the underlying technology has many common components, and what changes is the use you make of it.
> you cant compute yourself out of a broken world.
Most certainly not, but surely you can build better tools with the aspiration of facilitating certain goals, can't you? It's not the tools in or of themselves that will improve (or worsen) the world; rather, they're something at your disposal to pursue your goals.
> the myth that the computer will lead to a better, more equal society
Agreed that it won't. But, IMO, the strength of Shelter is that it covers a niche that many other systems (blockchain-y or otherwise) don't: data autonomy and confidentiality. Most popular web apps today are centralised silos that don't give you privacy from the operator, and those that aim for federation often don't give you much privacy either.
Now, it can be that those factors are not important for the specific thing you're developing, and that's fine. But, if they are, having an existing framework to build on top of can give you a head start (even indirectly, by showing you what works or doesn't).
Disclaimer: I'm involved in the development of Shelter. All opinions are my own.
I can't seem to find an old article written in the early days of OAuth 2.0 praising OAuth 1.0a because, among other things, it signed the URI parameters and, unlike OAuth 2.0 Bearer tokens, didn't require sending credentials in the clear (this was at a time when HTTPS wasn't nearly as ubiquitous, and OAuth 2.0 pretty much requires TLS to be used securely).
As someone often working with OAuth 2.0 flows, I enjoyed the article and think that it raises many good points. I'd also say that many of them come from things that affect _any_ system solving a problem similar to OAuth 2.0, because authorisation is hard to get right, or from extensions to the protocol that really aren't OAuth 2.0's fault (like the `realmID` parameter, obviously added to make the life of those API developers easier at the expense of those actually trying to integrate with their systems).
That said, I wholeheartedly agree with 'Problem 1: The OAuth standard is just too big and complex' and 'Problem 2: Everybody’s OAuth is different in subtle ways'. OAuth 2.0 is more of a framework or metastandard, and no API implementation uses all parts of it, because they simply are not relevant to that API or use case. This alone makes it quite hard to 'simply use OAuth' for an integration, because a big part of the job is figuring out which parts are used and in which ways, even if everything is done per the RFCs.
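To illustrate: even the one part virtually every provider shares, the RFC 6749 authorization-code token exchange, only looks roughly like the sketch below, and the divergence starts immediately around it (endpoint URL, client authentication method, extra parameters like `realmID`). The URLs and credentials here are placeholders:

    // Minimal sketch of the authorization-code token exchange (RFC 6749, section 4.1.3).
    // Providers differ in the endpoint, how the client authenticates (POST body vs.
    // HTTP Basic), and which extra parameters they require.
    async function exchangeCode(code: string) {
        const response = await fetch('https://auth.example.com/oauth2/token', {
            method: 'POST',
            headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
            body: new URLSearchParams({
                grant_type: 'authorization_code',
                code,
                redirect_uri: 'https://app.example.com/callback',
                client_id: 'example-client-id',
                client_secret: 'example-client-secret',
            }),
        });
        if (!response.ok) {
            throw new Error(`Token request failed with status ${response.status}`);
        }
        return response.json() as Promise<{
            access_token: string;
            token_type: string;
            expires_in?: number;
            refresh_token?: string;
        }>;
    }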
By contrast, OAuth 1.0a was much simpler and focused on a narrower problem. OAuth 2.0, on the other hand, allows you to convert a SAML claim from one provider into an OAuth 2.0 token for a different provider, and then delegate permissions conditionally to another actor for a particular action on yet another API.
Are we better off with OAuth 2.0? I say yes, because figuring out the differences between providers is probably easier than dealing with a hundred completely different implementations that have very different ideas of what an authorisation or delegation flow should look like. I think one can learn to reason about OAuth 2.0 and then apply that reasoning to integration jobs with slightly less cognitive load than a completely bespoke solution would demand.
At the same time, something sorely needed is a set of OAuth 2.0 profiles that standardise which features are used for a given kind of integration. Most social media sites probably have similar requirements, most auth-as-a-service providers have similar requirements, and so on. Having a common subset of features and decisions for common use cases and scenarios would, IMO, greatly simplify integration tasks and, paired with a good library, make it indeed possible to integrate with a random service in under an hour.
The thing is that some of this was the spirit of older standards like OAuth 1.0a and OpenID (not to be confused with the newer, OAuth 2.0-based OpenID Connect), and the world seems to have moved away from it, probably because of the flexibility that OAuth 2.0 affords and the desire to tightly control authorisation and external integrations.
I was thinking the same, but according to [1] some parameters like the hostname are hard-coded. So that might be why you need to make the code public on GitHub (although, technically, you could still comply with the AGPL with a private repository).
SEEKING WORK | Norway | Remote | Contract (full-time or part-time)
Experienced software engineer specialising in backend development, with a proven track record and over ten years of industry experience delivering results that drive projects forward.
What sets me apart:
- Broad expertise: a versatile skill set spanning data integration, Intel SGX, consensus protocols, REST APIs, and web development, with proficiency in C, C++, CSS, Docker, ES6+, express.js, Java, JavaScript, Kotlin, LDAP, Linux, Neo4j, nginx, Node.js, PHP, PL/SQL, Postfix, React, TypeScript, Xen, and (X)HTML5.
Why choose me:
- Strong problem-solving skills: I thrive on challenging problems and find creative solutions, excelling at optimising performance, designing scalable architectures, and resolving complex technical issues.
- Expertise in Identity and Access Management (IAM), security, and data integration: deep understanding and practical experience to deliver secure and seamless solutions. Open to exploring new challenges and technologies beyond these areas.
Available for full-time, part-time, and consulting engagements. Let's connect and discuss how I can contribute to your success.
Location: Trøndelag, Norway
Remote: Yes (remote only, unless within Trøndelag or occasional meetups within Scandinavia)
Willing to relocate: No
Résumé/CV: https://riv.ar/curriculum-vitae/
Email: hn-u5cgNWJM(-at-)protonmail.com
GitHub: https://github.com/corrideat, https://github.com/ApelegHQ