In Keycloak nothing made sense to me until I got myself familiar with OAuth 2.0 and OpenID Connect.
Keycloak's documentation seems vast, but isn't. There is also no way to search inside it. It's a pity.
Better documentation is contained in the administration web UI itself. There are "hints" and tooltips for almost every option there is. It really helped me a lot.
Keycloak is good software. It never failed for me. Even upgrading from 7.x.x to 16.x.x somehow just worked.
Yes, their docker image is fat, but it's also very flexible. Now that they are basing Keycloak on Quarkus instead of WildFly, the docker image should shrink in size.
quay.io/keycloak/keycloak 18.0.0 a6bd0f949af0 15 hours ago 562MB
quay.io/keycloak/keycloak 18.0.0-legacy 421e95f49589 46 hours ago 753MB
We're actually working on a new version of the Administration UI at the moment (I'm one of the devs) so this is useful feedback. We're looking for folks to try it out, so take a look at https://github.com/keycloak/keycloak-admin-ui/.
You can try it out on the latest Keycloak by passing the --features=admin2 flag on startup.
We have way too many issues with Keycloak. Sometimes I wonder why we integrated it at all. One of the main issues is that when you authorize via GitHub but cancel the authentication, it redirects to the Keycloak page rather than our login page. Couldn't find any solution yet.
Theming the Administration UI in the new version is a lot harder as it relies more on JavaScript for rendering than FreeMarker templates in the old one. We're keeping the option to use the old interface around until this has been mitigated.
That said, we are now relying on PatternFly (https://www.patternfly.org), which allows quite some customization through CSS variables.
As for the authorization screens, these are out of scope for the changes we're doing here. But they will probably get pulled into their own refactor at some point.
> Keycloak's documentation seems vast, but isn't. There is also no way to search inside it. It's a pity.
> Better documentation is contained in the administration web UI itself. There are "hints" and tooltips for almost every option there is. It really helped me a lot.
To echo everyone else: the Keycloak documentation does not do a good job of hand-holding you at all, and the number of possible ways you can configure and use the system and the amount of jargon and terminology used is massively overwhelming to someone trying to get started. It would be very helpful to have some "white paper"-esque summaries that walk you through some simple, typical use-cases.
I looked through the docs quickly before making this post and as an example here's a basic task for initial setup ("hook up an IDP", basically giving keycloak its database of users), and it's utterly incomprehensible to any human being who doesn't already know how to work the system and really essentially worthless even then. It's just... reading me the command line options and a couple config files? What do any of those values even mean? This is core functionality for Keycloak, and the documentation consists of "yeah, here's a command line with placeholders and a text file syntax, good luck bitches!".
Honestly I feel like you could do better simply by jumping into the UI and playing with options, it's not entirely unintuitive what's going on in the UI, but the docs are basically incomprehensible.
I actually know of several projects that have pretty much bogged down because of Keycloak configuration or role/privilege misconfiguration issues, and it's not hard to see why. It's the Turing tar-pit of IdPs: everything is possible and nothing is easy (or documented). Which is a shame, because it seems like an awesome piece of software, just inscrutable to the uninitiated.
As others are noting, I'm sure some of this is due to OAuth2 being an inscrutable piece of shit in general, same thing, it tries to do everything and it's so un-opinionated that you end up with a bunch of basically incompatible implementations that are each effectively their own "standard" anyway.
(posted this on the wrong child, moving it to the parent)
Disclaimer: Former Red Hatter but worked on OpenShift, not Keycloak
Working as a person providing commercial support for open source projects, I promise it doesn't actually work that way. Incentives are entirely for creating good documentation. Having crappy docs only hurts project adoption for paying and non-paying customers, increases the support burden, and wastes the time of your employees (who are the primary consumers of that documentation).
Usually documentation isn't great because writing (and maintaining!) good documentation is really hard. It's a continual effort and it takes engineer time away from bug fixes and feature dev, two things for which there is never-ending demand.
Edit: Pro-Tip: With Red Hat projects (like Keycloak, OKD, etc) it's always worth looking at the RH product docs as well as "open source" docs. For example if you use OKD, check OpenShift docs as well as OKD docs. You do (unfortunately and I wish they'd remove this) usually have to log in to a Red Hat account but you don't have to pay. You can create a free account and use that.
I can tell you that at least as of late last year, the OpenShift install docs omitted key details for setting it up.
We were unable to do so until contacting RH and getting additional instructions - I forget all the details; part of it involved creating DNS records mentioned nowhere in the docs.
If you can remember which ones, I'd be interested. I installed OpenShift a dozen or so times using those docs and I don't remember having to add any DNS records that weren't in the docs. I also grepped my notes and don't see anything. That said I do remember the DNS records being on a page that wasn't the one I thought it should be on.
Something I do criticize them for though (this is a problem in broader tech not just Red Hat) is that they are aggressive at culling/cutting old docs. The idea is to keep the docs small and relevant, but unfortunately in my opinion they cut valuable stuff. I always screen grab/print docs at the time in case they get removed because that's been a wide problem.
If you have a Red Hat subscription for OpenShift, then yes.
Although, unless it's high priority (like it's breaking functionality and there's no workaround) it's not usually a quick turnaround because it has to get prioritized and added to a sprint. It can take weeks or months. Although I did have a high-pri item fixed in under a day, so it does happen.
> This is one area where incentives don't align correctly for open source projects that offer commercial support.
This is true in some[0] cases. It's also true that documentation is a key source of customer acquisition and retention.
Projects get traction by making themselves useful out of the box[1] for some use-cases, making it appealing for hackers to configure and extend, teasing features the former would pay for and letting the latter figure out the buy-vs-build question.
Projects that do this well also learn shittons from real-world usage and feedback informing their roadmap and new opportunities to pursue.
[0] True when the perspective is "If we make the docs too good we're losing revenue" / "Everyone using Feature X gratis is a loss in MRR." It's an understandable view that's held widely. It's not often a significant revenue factor in my experience, and ~never when accounting for product and market insights gained by wider adoption.
We're still on the older one and looking forward to the Quarkus improvement specifically for boot times. Even with an empty DB, the old one takes several minutes to load and come up. It's the long pole in our install.
Very happy with KC otherwise. We make heavy use of its nice API to create providers and clients at install time.
I'm literally about to jump from 8 to 17 this week, so that's good to hear. It seemed seamless on my local setup and I was wondering if it was just too good to be true. It's a great piece of software.
You are correct about the documentation. I find the tragedy of open source documentation is that the people who need it most - the novices - are the ones who could write it best, if only they knew whether what they were saying was accurate. And then by the time you become an old-timer and know the ways, you just want to wipe your hands and walk away, because you're tired... and still not sure if all your knowledge is accurate.
But anyway, once it's all figured out, it runs very reliably.
The base image (registry.access.redhat.com/ubi8-minimal) is about 100 MiB.
ID CREATED CREATED BY SIZE COMMENT
a6bd0f949af01b5680767225c3ac2b428d9b6921a6a9a420f6189f2523931c4c 18 hours ago ENTRYPOINT ["/opt/keycloak/bin/kc.sh"] 0 B buildkit.dockerfile.v0
<missing> 18 hours ago EXPOSE map[8443/tcp:{}] 0 B buildkit.dockerfile.v0
<missing> 18 hours ago EXPOSE map[8080/tcp:{}] 0 B buildkit.dockerfile.v0
<missing> 18 hours ago USER 1000 0 B buildkit.dockerfile.v0
<missing> 18 hours ago RUN /bin/sh -c microdnf update -y && microdnf install -y java-11-openjdk-headless && microdnf clean all && rm -rf /var/cache/yum/* && echo "keycloak:x:0:root" >> /etc/group && echo "keycloak:x:1000:0:keycloak user:/opt/keycloak:/sbin/nologin" >> /etc/passwd # buildkit 272 MB buildkit.dockerfile.v0
<missing> 18 hours ago COPY /opt/keycloak /opt/keycloak # buildkit 192 MB buildkit.dockerfile.v0
1ecf95eda522cf8db84ac321e43a353deea042480ed4e97e02c5290eb53390c3 5 days ago 20.5 kB
<missing> 5 days ago 107 MB Imported from -
For the most part I am also happy with Keycloak, but they could do a far better job documenting things, especially their language adapters. For example the "Readme" for the `keycloak-connect` Node.js package has a link to documentation, but that documentation fails to document anything around the package.
Likewise, I had better luck once I understood OpenID Connect and then treated Keycloak as an extension of that. I even ended up writing my own code to deal with the bearer token passed to our API, because I couldn't find anything. If anyone is interested I can share it, but it isn't anything amazing.
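To give a feel for it, the core of such bearer-token handling is only a few lines; below is a minimal sketch (not that exact code) using the jose library, with the base URL and realm name as placeholders. Quarkus-based Keycloak serves the keys under /realms/...; older versions prefix the path with /auth.

import { createRemoteJWKSet, jwtVerify } from "jose";

// Keycloak publishes each realm's signing keys at a JWKS endpoint.
// "myrealm" and the base URL are placeholders for your own setup.
const JWKS = createRemoteJWKSet(
  new URL("http://localhost:8080/realms/myrealm/protocol/openid-connect/certs")
);

// Verify the value from an "Authorization: Bearer <token>" header.
export async function verifyBearerToken(token: string) {
  const { payload } = await jwtVerify(token, JWKS, {
    issuer: "http://localhost:8080/realms/myrealm", // must match the realm issuer
  });
  return payload; // sub, preferred_username, realm_access.roles, ...
}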
Most of my best help came from outside of the Keycloak support groups and instead reaching out to other people who use Keycloak.
> In Keycloak nothing made sense to me until I got myself familiar with OAuth 2.0 and OpenID Connect.
Hot take: OAuth2 is a really shitty protocol. It is one of those technologies that get a lot of good press, because it enables you to do stuff you wouldn't be able to do in a standardized manner without resorting to abysmal alternatives (SAML in this case). And because of that it shines in comparison. But looking at it from a secure protocol design perspective it is riddled with accidental complexity producing unnecessary footguns.
The main culprit is the idea of transferring security-critical data over URLs. IIUC this was done to reduce state on the involved servers, but that advantage has completely vanished if you follow today's best practices and use the PKCE, state, and nonce parameters (together with the authorization code flow). And more than half of the attacks you need to prevent or mitigate with the modern extensions to the original OAuth concepts are possible because grabbing data from URLs is so easy: An attacker can trick you into using a malicious redirect URL? Lock down the possible redirects with an explicitly managed URL allow-list. URLs can be cached and later accessed by malicious parties? Don't transmit the main secret (bearer token) via URL parameters, but instead transmit an authorization code which you can exchange (exactly) once for the real bearer token. A malicious app can register your URL scheme in your smartphone OS? Add PKCE via server-side state to prove that the second request is really from the same party as the first request...
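For anyone who hasn't looked at PKCE: the mechanics are tiny. The client invents a random secret (the code verifier), sends only its hash (the code challenge) on the front channel, and reveals the verifier only in the back-channel token request. A rough sketch with Node's crypto module:

import { randomBytes, createHash } from "node:crypto";

// The code verifier is a high-entropy secret the client keeps to itself.
const codeVerifier = randomBytes(32).toString("base64url");

// Only its SHA-256 hash (the code challenge) travels in the authorization URL,
// together with code_challenge_method=S256.
const codeChallenge = createHash("sha256").update(codeVerifier).digest("base64url");

// The later back-channel token request carries the original verifier, so an
// attacker who only saw the URL (code + challenge) cannot redeem the code.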
It could have been so simple (see [1] for the OAuth2 roles): The client (third party application) opens a session at the authorization server, detailing the requested rights and scopes. The authorization server returns two random IDs – a public session identifier, and a secret session identifier for the client – and stores everything in the database. The client directs the user (resource owner) to the authorization server giving them the public session identifier (thus the user and possible attackers only ever have the possibility to see the public session identifier). The authorization server uses the public session identifier to look up all the details of the session (requested rights and scopes and who wants access) and presents that to the user (resource owner) for approval. When that is given, the user is directed back to the client carrying only the public session identifier (potentially not even that is necessary, if the user can be identified via cookies), and the client can fetch the bearer token from the authorization server using the secret session identifier. That would be so much easier...
Alas, we are stuck with OAuth2 for historic reasons.
You're right about the complexity and the steep learning curve, but there's hope that OAuth 2.1 will simplify this mess by forcing almost everyone to use a simple setup: authorization code + PKCE + DPoP. No "implicit flow" madness.
Another big problem with OAuth is the lack of quality client/server libraries. For example, in JS/Node, there's just one lone hero (https://github.com/panva) doing great work against an army of rubbish JWT/OAuth libs.
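For what it's worth, a code-flow client built on that author's openid-client package is only a handful of lines; a sketch (issuer URL, client ID, and redirect URI are placeholders):

import { Issuer, generators } from "openid-client";

// Discover endpoints from the provider's .well-known configuration.
const issuer = await Issuer.discover("https://auth.example.com"); // placeholder
const client = new issuer.Client({
  client_id: "my-client",                        // placeholder
  redirect_uris: ["https://app.example.com/cb"], // placeholder
  response_types: ["code"],
  token_endpoint_auth_method: "none",            // public client, no secret
});

const codeVerifier = generators.codeVerifier();
const authUrl = client.authorizationUrl({
  scope: "openid profile",
  code_challenge: generators.codeChallenge(codeVerifier),
  code_challenge_method: "S256",
});
// Send the user to authUrl; on the callback, exchange the code:
// await client.callback("https://app.example.com/cb", params, { code_verifier: codeVerifier });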
The problem with the authorization code flow is that it was not built with SPAs in mind, i.e. you always need a server-side component that obtains those tokens.
So a 100% client/FE solution based on Next.js/React/Angular/Vue etc. cannot simply be deployed to a CDN and then use Auth0/AWS Cognito/Azure AD or whatever without running and hosting your own server-side component.
The catch is that since the client web origin and AS web origin are often different sites, the AS has to actually implement CORS on their token endpoint.
Some implementations unfortunately (perhaps due to a misunderstanding about what CORS is meant to accomplish) make this a per-tenant/per-installation allowlist of origins on the AS.
Auth0 and Ping Identity (my employer) document CORS settings for their products. I'm not sure about AWS; you might need to add CORS via API Gateway. Azure AD supports CORS for the token endpoint, but they may limit domains in some manner (such as to the redirect URIs of registered clients).
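To make the CORS point concrete: a browser-only SPA ends up doing the code-for-token exchange itself with something like the snippet below, and that POST goes to the AS's origin, so the AS token endpoint has to answer the preflight and send Access-Control-Allow-Origin (all names here are placeholders):

// Values obtained earlier in the flow (placeholders for this sketch).
declare const authorizationCode: string;
declare const codeVerifier: string;

// Runs in the browser at https://app.example.com and posts to a different
// origin, so it only works if the AS allows that origin via CORS.
const response = await fetch("https://auth.example.com/oauth/token", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: new URLSearchParams({
    grant_type: "authorization_code",
    client_id: "my-spa-client",                  // public client, no secret
    code: authorizationCode,                     // value returned on the redirect
    redirect_uri: "https://app.example.com/cb",
    code_verifier: codeVerifier,                 // PKCE verifier from step one
  }),
});
const tokens = await response.json(); // access_token, id_token, ...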
FWIW, I created a demo ages ago (at https://github.com/pingidentity/angular-spa-sample), which by default is configured to target Google for OpenID Connect and uses localhost for local development/testing. It hasn't aged particularly well in terms of library choices, but I do keep it running.
A deployment based on older Angular is also at https://angular-appauth.herokuapp.com to try - IIRC I used a node server just to deal with wildcard path resolution of the index file, but there's otherwise no local logic.
I appreciate your work on clarifying the situation. But my statement still stands, and you seem to back it up in your draft:
> The JavaScript application is then responsible for storing the access token (and optional refresh token) as securely as possible using appropriate browser APIs. As of the date of this publication there is no browser API that allows to store tokens in a completely secure way.
So with OAuth 2.0 + PKCE and no BE component, the tokens are directly exposed to the client, just as they were with the implicit flow. Also, if I'm not mistaken, PKCE extension is optional in OAuth 2.0, and without it you cannot securely use the code flow (as you would have to expose the client secret).
Storing access tokens in JavaScript and storing them in a native application have about equal protections - but by far most JavaScript apps are left far more susceptible to third party code execution.
The answer is typically to make such credentials incapable of being exfiltrated by adding proof-of-possession, such as the use of MTLS or of the upcoming DPoP mechanism.
Note that preventing exfiltration doesn't prevent a third party from injecting logic to remote-drive use of those tokens and their access sans exfiltration.
While access tokens can be requested with specifically limited scopes of access, a backend server could potentially further control the level of access a front-end has. The problem is that the backend and frontend are typically defined in terms of business requirements. As such, there hasn't been a clear opportunity for standardizing such approaches.
When using a backend, my advice is to be sure you don't just have your API take a session cookie in lieu of an access token. APIs are typically not constructed with protections from XSRF and the like (or rather, an access token header serves as an XSRF protection while just a session cookie will not).
> Also, if I'm not mistaken, PKCE extension is optional in OAuth 2.0
Correct - although it is strongly recommended by best current practices, including recommending deployments limit/block access to clients which do not use PKCE.
> …and without it you cannot securely use the code flow (as you would have to expose the client secret).
PKCE has nothing to do with client secrets or client authentication. It provides additional strong correlation between the initial front-end request with the following code exchange request.
It was written to support native apps, as many such apps used the system browser for the authorization step and then redirected back into a custom URL scheme. Since custom URL scheme registrations are not regulated, malicious apps could attempt to catch these redirects. PKCE provides a verification that the same client software created both the redirect to the authorization endpoint and the request to the token endpoint. Even if a malicious piece of software got the code, they wouldn't have a way to exchange it for an access token.
Some of the original OAuth security requirements for clients have been found to be poorly implemented, but PKCE provides equivalent protections against these particular issues. Unlike client-only implementation logic, PKCE support is something that an AS can audit. Hence it is likely that PKCE will be a requirement in future versions of OAuth.
I really appreciate your effort, but somehow you just produce a lot of text without arguing against my point: OAuth 2.0 in a FE SPA is just broken, or "difficult" at best.
a) The text explains how it's not broken
b) It's not difficult if you use a library that ensures this is done correctly. Otherwise yes, secure auth is difficult, just like it's difficult anywhere else.
Even for 100% FE solutions, the current best practice from OAuth authors [1][2] is to use authorization code + PKCE (optionally, +DPoP). The implicit flow is deprecated (since PKCE), and from OAuth 2.1 it will be removed entirely.
It depends on the provider. For Mastodon and Pleroma, there's an endpoint to generate a client ID/secret that you can call from the client. The flow is basically as follows (a rough browser-side sketch comes after the list):
1. Prompt for an instance name
2. Get a client id/secret from the instance and put it in localStorage
3. Redirect to the login page
4. Once you get the callback, get the token using the code and the client ID/secret from localStorage
5. You're done. No server needed.
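A browser-only sketch of that flow (the Mastodon-style endpoints are written from memory, so double-check them against the instance's API docs):

// 1-2. Ask the user's instance for client credentials and cache them.
async function registerClient(instance: string) {
  const redirectUri = `${location.origin}/callback`;
  const res = await fetch(`https://${instance}/api/v1/apps`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ client_name: "My SPA", redirect_uris: redirectUri, scopes: "read write" }),
  });
  const { client_id, client_secret } = await res.json();
  localStorage.setItem("oauth_client", JSON.stringify({ instance, client_id, client_secret }));
  // 3. Redirect to the instance's login/consent page.
  location.href = `https://${instance}/oauth/authorize?client_id=${client_id}` +
    `&redirect_uri=${encodeURIComponent(redirectUri)}&response_type=code&scope=read+write`;
}

// 4-5. Back on /callback, trade the code for a token. No server of our own involved.
async function handleCallback(code: string) {
  const { instance, client_id, client_secret } = JSON.parse(localStorage.getItem("oauth_client")!);
  const res = await fetch(`https://${instance}/oauth/token`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      grant_type: "authorization_code",
      code, client_id, client_secret,
      redirect_uri: `${location.origin}/callback`,
    }),
  });
  return res.json(); // { access_token, ... }
}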
SPA is HTML/JS served by the server. We don't need client-only solutions. We need devs to understand how HTTP and browsers work.
It means we simply keep using what actually works, i.e. a server-side component that obtains authorization, with simple mechanisms to ensure the token stays on the server; the FE speaks to the server, which in turn speaks to the target app. Proxying is not that difficult a problem, and we don't have to run in circles inventing different flows only to cater to devs who can't learn their field.
We need CDN solutions for front-ends because that's the best way to deliver great, scalable performance for complex SPAs.
We also need a purely client-side flow for mobile (native) apps.
Additionally, the authorization code flow (with PKCE) in Keycloak still supports pure client side authorization. It's more complex than the implicit flow, but it doesn't really matter as any library (including keycloak-js) will take care to ensure it's done correctly.
Honestly, DPoP[1] is pretty horrible. It is a partial re-implementation of TLS client authentication deep inside the TLS connection. What's wrong (a minimal sketch of such a proof follows these points):
- No mandatory liveness check. That means you don't know whether the proof of possession was indeed issued right now or has been pre-issued by an attacker with past access. Quoting from the spec[2]: """Malicious XSS code executed in the context of the browser-based client application is also in a position to create DPoP proofs with timestamp values in the future and exfiltrate them in conjunction with a token. These stolen artifacts can later be used together independent of the client application to access protected resources. To prevent this, servers can optionally require clients to include a server-chosen value into the proof that cannot be predicted by an attacker (nonce).""" This is a solved problem in TLS.
- The proof of possession doesn't cover much of an HTTP request, just "The HTTP method of the request to which the JWT is attached" and "The HTTP request URI [...], without query and fragment parts." It doesn't even cover the query parameters or POST body. The given rationale: """The idea is sign just enough of the HTTP data to provide reasonable proof-of-possession with respect to the HTTP request. But that it be a minimal subset of the HTTP data so as to avoid the substantial difficulties inherent in attempting to normalize HTTP messages.""" In short: Because it is so damn difficult to do it on this layer. Of course, TLS covers everything in the connection.
- Validating the proofs, i.e. implementing the server side of the spec, is super complicated, see [3]. And to do it right you also need to check the uniqueness of the provided nonce (see [4]), which brings its own potential attack vectors. And to actually provide liveness checks (see above) you have to implement a whole extra machinery to provide server-chosen nonces (see [5]). I expect several years until implementations are sufficiently bug-free. Again, TLS has battle-tested implementations ready.
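To make the criticism concrete, building such a proof with the jose library looks roughly like this; note how little of the request actually gets signed (a sketch, with a placeholder token endpoint):

import { SignJWT, generateKeyPair, exportJWK } from "jose";
import { randomUUID } from "node:crypto";

// The client holds a key pair; the public half rides along in every proof.
const { publicKey, privateKey } = await generateKeyPair("ES256");

// A DPoP proof covers only the HTTP method (htm) and the bare URI (htu):
// no query string, no body, and no server-provided freshness value unless
// the server opts into the extra nonce mechanism.
const proof = await new SignJWT({
  htm: "POST",
  htu: "https://auth.example.com/token", // placeholder token endpoint
})
  .setProtectedHeader({ alg: "ES256", typ: "dpop+jwt", jwk: await exportJWK(publicKey) })
  .setJti(randomUUID()) // unique ID the server must track to detect replays
  .setIssuedAt()
  .sign(privateKey);

// The proof is then sent as the "DPoP" request header next to the access token.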
Best of all? There is already a spec to do certificate-based proof of possession using mutual TLS! See [6]. We really should invest our time in fixing our software stack to use the latter (e.g. by adding JavaScript-initiated mTLS in browsers) instead of yet another band-aid in the wrong protocol layer.
For most Keycloak users, a very tiny subset of OIDC is being used too. Usually there is no three-way relationship between a third-party developer, an API provider and a user anymore. You could rip scopes out of Keycloak and few users would lose the ability to cover their use cases. Rarely is there more than one set of scopes being used with the same client.
Keycloak also supports some very obscure specs, my favourite probably being "Client Initiated Backchannel Authentication", which can enable a "push message sent to authenticator app" type of authentication flow using a lot of polling and/or webhooks.
Can you disclose the number of users & apps you have? Are you using Keycloak or do you pay for Red Hat Single Sign-On (for context, that's the name of the downstream product that Red Hat sells subscriptions for)?
The downside to using Red Hat Single Sign-On is that it is a vastly inferior product to using Keycloak upstream as it is so many versions behind.
This means that bug fixes and features haven't trickled down yet. Although RH SSO 7.5 jumped from Keycloak version 9.0.17 (in RH SSO 7.4) to 15.0.2 so there's some improvement there... but Keycloak just released 18.0.0...
Thanks for mentioning this, I didn't realize Keycloak is a Red Hat product. I'll plan to move to something else. Anything Red Hat makes turns into a catastrophe.
My company used Keycloak for a long time (I'm not there any more) and I agree with everyone here, it works great, but it's hard to understand unless you already know oauth/oidc, and it is a huge binary.
While Keycloak is a great out-of-the-box solution, my #1 complaint at the time was how heavyweight it was, which was a burden for development, followed closely by its packaging as a J2EE app and bundling with Wildfly (at the time).
This meant we needed to know not only about Keycloak itself, but also about Wildfly's special quirks, the clustering system (Infinispan??), Java and Docker.
Now it's packaged with Quarkus, which is another dependency to learn about, and to be honest despite the quality of the finished product, all those dependencies have become pretty off-putting.
So while I can recommend Keycloak's functionality, if you're not already deploying Java apps as part of your job, I suspect it will present a pretty serious administrative burden to deploy into serious production.
For me this is all kind of opaque. Apart from a theme put into a folder and some environment variables, I don't touch anything in Keycloak: first, I had no requirement to consider it, and second, I would very likely be doing something that isn't best practice in OIDC/OAuth terms.
It's the only Java app we run in our stack, but that doesn't matter to me under Docker, and on Windows we run a portable Java from a subfolder of Keycloak, so nothing system-wide in the PATH.
I was interested in Zitadel, but because it requires Kubernetes, it can't replace Keycloak in my docker-compose managed homelab setup. If you could just run it as a standalone container, I'd give it a shot.
Sounds like you would be interested in v2 (https://zitadel.ch/v2), as it will be provided as a single binary, which you should be able to use in your homelab setup.
There are a lot of other improvements, but if that's the only dealbreaker, v2 should take care of it.
True, we think that for most setups and early explorations K8s is too much of a beast. TBH we are in the process of switching our cloud from K8s to Cloud Run, which is kind of close to Knative but without the ops hassle ;-)
What do you not like about the new pricing? The pay-as-you-go approach?
The background to this is that we got many customers asking for features on the lower tiers, and that many of them wanted more linear scaling in their business cases. On the other hand, we wanted to make sure that even from the start you get all security features without paywalling them.
The 25K requests roughly translate to 10K users per month, and you will be able to get that without a credit card :-)
What is your opinion on ORY, specifically ORY Kratos? We have been building on Kratos for some time now and find that it is not super well documented, but it is still a very pleasant experience and their ORY Cloud project is backed by support from their team.
How does Zitadel differ/compare? Do you have similar goals as an organization?
Well to keep it brief, we see it the following way. Use:
- ZITADEL: If you want a turnkey solution built for the cloud, with great support for B2B, a strong audit trail and self-hosting, but also the option for SaaS
- Ory: If you want the flexibility to customize all the stuff, but are aware that it is not as turnkey as ZITADEL and Keycloak
- Keycloak: If you want a turnkey solution with high maturity and a lot of features, but with some gaps in regard to B2B, cloud native operation and support
This more or less reflects my opinion about Ory. I like the way they built their suite because it's totally flexible. But I dislike the fact that it does not feel like a turnkey solution and needs more work than plugging in an OIDC client.
So what we think and aim for is that ZITADEL combines the best of Auth0 and Keycloak [1] while bringing some unique features to the table. For example an unlimited audit trail (built with event sourcing), great self-service capabilities for B2B and B2C cases (customers can manage their own org, user, federation, access) and the possibility to soon run "serverless" (well, at least as a serverless container). With our cloud service we will also soon allow customers to move their data around (see data location [2]). And all of this while being totally open source.
Looks nice, but the CockroachDB dependency is kind of a pain for people not already using that. Any plans for alternative DB support, like Maria or Postgres?
If CockroachDB supports the PG wire protocol and appears to be a functional subset of PG then in theory we might be able to just point it at a PG database and it might work :)
I too would prefer to use PG than Cockroach, if only because I have ops experience with PG. But Cockroach is certainly intriguing.
Thanks, we hear you! Currently we are discussing two things for this subject.
1) We think providing an embedded DB of some kind for easy-to-use cases with ZITADEL might be favourable to some of you (think SQLite or an embedded CockroachDB)
2) Allowing plain PostgreSQL besides CockroachDB should be an easy thing to do, since we already make use of the PG wire protocol. We plan to address this in a 2.x release
What do you think of this? Or what DB would be on your wishlist?
Thanks for taking the time to answer everyone's questions, and for asking me this one.
My own personal preference would be if you could please support PG, because in that case I could be confident of going into production with what I configure, even if it's less scalable than CockroachDB. I'd really prefer to be learning about how Zitadel works, rather than learning CDB or trying to remember SQLite CLI commands.
I'm happy to give CDB a spin once Zitadel 2.0 comes out, but it will add to the burden of onboarding it into my stack, and I'd think that's something you'd like to avoid going forward :)
I would love to have the embedded db option as well.
Reason: for demos at conferences, but more importantly in a training/workshop or tutorial scenario, it should be easy to get it running and let students replicate the setup. I assume this can be a nice multiplier and is underrated by many "modern" projects. (Elastic APM 8.x: I am looking at you.)
Makes sense, thank you for the answer! I do think the turnkey solution middleground is badly needed and I'm excited to see where ZITADEL goes. We've already committed heavily to Ory on this project, but maybe on the next one we'll be able to explore ZITADEL!
That's really surprising to me. I've been building some things with Ory Kratos myself and I find the documentation pretty good. There's definitely room for improvement but I'm typically able to find what I need pretty easily. I certainly find it much more usable than the Keycloak docs.
Was working at a Java shop once which used Keycloak as a central IAM solution. As an FE dev, I was tasked to customize/style the login page provided by Keycloak, and quickly faced what you described: it's pretty heavily Java-based; even to edit HTML templates I had to recompile using a full-blown Java/JVM stack.
As an FE dev without a Java background, this became pretty difficult. But once we finished that with the help of some of the BE Java devs, it ran (and still runs) quite stably, and the KeycloakJS adapter I integrated was alright without many surprises.
> While creating a theme it’s a good idea to disable caching as this makes it possible to edit theme resources directly from the themes directory without restarting Keycloak.
I customized Keycloak 10 login page a while back, and it did not require anything but markup. My Keycloak 18 instance runs the same customization now, unchanged.
> my #1 complaint at the time was how heavyweight it was, which was a burden for development
What do you mean by "heavyweight"?
I ask because Java is used in other large open source projects (Elasticsearch, Cassandra, Android), and it really depends on how it's being used (by Keycloak as well)
I've also come across GraalVM which can significantly reduce Java memory consumption (if that is something you are referring to)
Well just bear in mind that this was about 4 years ago (we were fairly early adopters apparently), and I understand that my comments here are probably no longer fair, and probably don't apply to KC as it is today.
But since you asked, basically Keycloak was by far the single largest component of our stack, which didn't sit well with me because by any measure, our application was far more complex than Keycloak.
At the time we were transitioning away from JEE to JSE microservices, and Keycloak was a full-blown JEE application with all the heaviness that we were successfully leaving behind; it took over a minute to start, while our other Java services would launch in less than a second. Of course this didn't impact production at all - but most engineering time is spent in development and testing, where this was a real problem, especially if we were experimenting with and learning about new features.
All of this contributed to my perception of heaviness.
We hear comments like this a lot. Keycloak has a lot of functionality but also a lot of quirks. We have a product, FusionAuth, that folks often consider at the same time.
Similarities between our products:
* Overall base feature set (OAuth, OIDC, SAML, user management, authentication, RBAC) is similar.
* Both written in Java.
* Both use container technology to hide Java from you :)
* Both develop in the open (we use GitHub issues, they use a mailing list).
Differences:
* They're OSS, we are free as in beer.
* I haven't found a compelling hosting solution for Keycloak, most folks self host. FusionAuth offers a hosted product if you'd like.
* Keycloak has more niche features (CAS SSO support) and a bigger community.
* FusionAuth has better, more straightforward docs.
* FusionAuth user UI customization is easier.
* FusionAuth supports a number of languages with client libraries for easier config management. I only saw a python client library for Keycloak.
* FusionAuth supports unlimited tenants, limited only by your server's resources. We have folks running thousands of tenants. Last time I looked Keycloak had issues around 400 realms (their term for tenants): https://keycloak.discourse.group/t/maximum-limit-of-realms/8...
I would offer two suggestions that might improve reception:
1. Put that disclaimer at the top
2. Shrink it to no more than a couple sentences and link to a blog post or something for the detailed comparison. It looks pre-canned and kind of spammy the way it is now
Ah, thanks. It probably depends on what you mean by tenant; there's no real accepted definition. By tenant, I mean a separate user space, and I think that is analogous to Keycloak realms. (So [email protected] in tenant 1 has no connection at all with [email protected] in tenant 2; separate passwords, different profile data, etc.)
But it sure looks like you can have multiple tenants (in your SaaS application) all against one realm in Keycloak if you don't want that level of separation. In fact, here's an article from 2020 discussing that exact architecture: https://medium.com/swlh/using-keycloak-for-multi-tenancy-wit...
Separate user spaces are achievable within a single realm. In our case, we do it using an account chooser within our application code. After logging in, you select the account you want to work within.
I think of realms as part of a trust model. No realm can safely "trust" another realm's user identities. For example, if I own my own realm, I can set up my own bogus idp and pretend to be the user [email protected], even though I don't own that email or have any access to it.
In our case, the exception is the central realm, where all customers who don't need SSO live. In SaaS, this is usually most customers (> 90% for us). Only larger and more security-minded orgs will want their own SSO and hence their own realm. In the central realm, anyone can authenticate through a set of "trusted" idps (Facebook, Google etc.) as well as good old email and password. This does make the central realm vulnerable to compromise of e.g. Facebook.
We tried using keycloak in a startup where I worked. It needed a loooooooot of memory and was very slow to start. It probably needed some JVM tuning, but we were just deploying as a stateful set (for the postgres). The docker images were also huge.
We had to use another FOSS project called Gatekeeper as authentication software along with Keycloak, which got obsoleted and replaced with a different project (Louketo Proxy or some such).
The community support was also relatively inactive compared to other FOSS projects that we used (for other areas of the stack). Overall, the experience was not so great, so we decided to ditch both Keycloak and Gatekeeper. This was about a couple of years back. Not sure what the current status is.
Some other alternatives that we wanted to try were from the Ory project (Kratos etc.), but we just went with some proprietary auth solution in our startup.
We have been using it for 6-7 years now. Have been able to run it very stably and integrated a lot of external IdPs to offer proper SSO on multiple of our software stacks. Added a lot of other open source pieces of software we run for our backoffice needs (Hashi Vault, AdminBro, etc.). So far very happy with it. Running in clustered mode without issues, and as such we have few issues with the startup times. It probably helps that we have a solid background in Java-based development and deployment and are less worried about the amount of memory it uses for the full suite.
This fascinated me - where and how does this fit in with other identity providers (and thence into SSO)?
I kind of yearn for client certificates everywhere, simply because I can grok how that remains secure as we pass through layer after layer. The rest I just worry about.
Keycloak can broker between identity providers. It can use social logins as identity providers, connect to LDAP, Kerberos and others for user federation, and then provide SAML and OIDC to other applications.
Exactly this. OIDC and SAML integrations with customers' IdPs. Map identity metadata from the customer into our realm, so they can provide data in any way they want and we map it down to our standard, which allows our applications to stay clean when using this metadata for business logic.
We have also added an event plugin to keycloak to push login events to a queue for other services to consume.
We also offer local keycloak identities in case a customer does not or cannot provide their own identities, and have added haveibeenpwned logic to check password strength/reuse for these local keycloak identities.
As someone who has superficially looked into it a couple times and gotten pushed away by the complexity: what do you recommend for a backend? Is there another container that provides an LDAP service I could use? Or Kerberos?
I am rebuilding my homelab soon and I am interested in having centralized auth across all systems and as many applications as feasible, using my centralized fileserver as an IDP source via some application or another, as well as using Keycloak for some one-off projects where I don't really want to write a user layer.
Personally I just use Keycloak as the backend (storing the user and group information). I provision it with Terraform since I find it easier to use than the web UI.
With Quarkus, they went to the other extreme -- if you are using Docker, to get as fast a startup as possible, you have to build your image with your configuration and the modules you use baked in.
Overall, I'm pretty satisfied. There are some bumps, but they are not Keycloak's fault (having two different keytabs for two different host names for two different container images sharing the same IP, with the reverse name not matching, is kind of difficult).
In my experience, Keycloak is best treated as a "pet" on the "pets vs. cattle" spectrum. It takes a while to warm up, so you don't want to be constantly restarting it. I deployed it out-of-sync with the main application deployments.
As an open source option, it's quite powerful and full-featured. It's also quite configurable.
If I had one feature ask, it's that it doesn't play well with infrastructure-as-code ideas. While you can load a new realm from a JSON, it's harder to keep changes synced after that.
Have used and brought Keycloak into many companies over the years as a solution. A bit of a steep learning curve, but it essentially works as designed, either as the IdP (rare in my experience) or, more commonly, as an IAM broker.
Big companies need it because their hands are tied to old and inflexible vendor APIs. However, with some effort they can craft a branded and modern UI/UX. The backend works with just about anything old auth-related whilst supporting newer, modern auth schemes.
I am surprised IBM has not made Red Hat ruin it yet.
To say IBM is a slightly better steward of their open source efforts than Oracle never leaves one with much comfort.
Red Hat never needed anyone's help to ruin things. Their solutions are poorly designed bloated crap that "can get the job done" if you run them within a Red Hat platform and don't mind banging your head against a brick wall. Just because they're open source darlings doesn't mean we can't call a spade a spade.
The secret to making keycloak UI/UX good is to disregard the account console and build your own with the new accounts API (which the accounts2 console also uses).
Also, if you just use one broker you can skip the login experience entirely.
I was interested in Authentik, but I was perusing the docs and was extremely put off by the way in which Authentik manages itself as well as additional “outposts”.
> The docker integration will automatically deploy and manage outpost containers using the Docker HTTP API.
> This integration has the advantage over manual deployments of automatic updates (whenever authentik is updated, it updates the outposts)
NO. I do not want software I use to update itself automatically in prod. And I especially do not want to give a docker socket to it so that it can automatically add new components.
I’ve been a keycloak advocate since my jboss days (really the only good thing that came out of jboss). I have never heard of authentik and I’m so glad you mentioned it, it looks amazing! I had been contemplating doing a similar project but this would satisfy that need. Thank you.
This looks awesome! I dropped Keycloak because its 2 GiB of RAM was too much for me to commit to SSO on my tiny VPS, so I just switched to static htpasswd management. But this looks like it might be a great replacement.
It seems to support groups/selective access to services through group membership, which is great. Does it support username authentication, or does it require that an LDAP server or other OpenID provider is used as a source of truth?
Using Keycloak for all authentication, both in the cloud under Linux and within Windows enterprise environments. Use it for SSO for Node-RED, Grafana, ASP.NET & Vue.js apps. Was fairly easy to move from Auth0.
(Caddy maintainer here) I don't use that plugin myself but AFAICT most users ask questions on the GitHub repo so probably best to ask for help there if you need it.
Thank you, my question was more of a message in a bottle, to see if anybody was using it. I was able to configure the plugin nicely and get Gitea as a source of users and groups to control access to other applications.
And by the way, thank you, I am really impressed by the quality of Caddy.
I have a few services on my family server (say, Gitea, Grafana, a finance tracking app etc.). I'd like to have SSO but also limit which users can use which services (e.g. my significant other can use Grafana but not Gitea).
Is integrating the above services with Keycloak enough? Or would I need other components? Or maybe I've got it wrong and should reconsider the architecture?
It will definitely work - Keycloak can provide its own user database, or it can use an external one, as well as do some crazier things that go outside of the scope you mentioned.
In the simplest setup (non-HA, local user database), you would create users inside Keycloak, assign them to different groups, then create applications (which handle configuration for individual applications like Grafana and Gitea) and create rules that specify that only users that belong to a specific group can log in to a specific application.
You can also allow linking multiple external SSOs this way to a single Keycloak identity, and even include login through Kerberos 5 or client certificates.
I've integrated keycloak for SSO in our apps, we have a one-to-one mapping between keycloak realms and the tenants in our apps, works great and it can glue together many disparate IdP/auth solutions: onelogin, LDAP/AD, etc. At one moment we needed to implement a custom OTP method, so we developed a plugin by implementing a simple SPI (the auth flow is configurable).
It is a very stable piece of software, we have a combination of Terraform+ansible scripts to deploy it, and once it is done we forget about it until we need to upgrade the version using the same scripts. The biggest drawback IMHO is the documentation quality ...
As others mentioned, Keycloak is a good choice if you need a self-hosted IAM solution and are familiar with Java development.
If you don't need self-hosted, I can recommend using Amazon AWS Cognito as an OAuth2/IAM solution - it is included in the free tier for up to 50,000 MAUs, plus the signup/lost-password mails etc. are sent through Amazon SES, which heavily increases the inboxing rate. You could always transition later to a self-hosted solution like Keycloak. Given both are OAuth2, that transition should be smooth.
What are your main reasons for recommending Cognito? That it is free and easy to get going with?
Have you customized the user login experience?
I only ask because I've heard folks talk about how Cognito does the basics right (which is great, no one should roll their own auth) and is quick to get started with, and is serverless and free (unless you want SAML connections).
But once you get past the basics, it turns into a ton of hassle. And there's been little progress in feature set/docs/etc (though last year they did do a UI refresh).
They also don't let you export your password hashes, so when you transition to a self hosted solution, you must force your users to reset their passwords (or perform a drip migration). I wrote about these options here: https://fusionauth.io/blog/2022/02/07/how-to-migrate-from-co...
Disclosure, I work for a Cognito competitor, FusionAuth.
Yes, I mostly recommend it because it is free and I've used it before, plus I've been on several projects where we used Keycloak. For me, Cognito's 50,000 monthly active users is probably all I will ever need, and as you said, it's quick to get started. Keycloak is the other way around: you have to invest beforehand into learning and running it. Also, if you allow self-signup by your users, email inboxing is a serious issue. So even if you run Keycloak yourself, you must think about how you are going to deliver emails - which will bring you to other SaaS services like Mailchimp or AWS SES anyway.
I can't speak about FusionAuth or Auth0, but yes, other SaaS might be as good as Cognito as well - but I do not know their free tiers.
Disclaimer: I am not working for any of those, just a small side-gig SaaS builder/web dev speaking from my experience.
Thanks for sharing your rationale, makes a ton of sense. Cognito's free tier is great and frankly, takes auth off of many folks' plate. Which is a good thing.
I've heard similar things about Azure AD B2C. Strangely, Google has a comparable offering ( https://cloud.google.com/identity-platform ) but I've never talked with anyone using it.
As someone who works with Azure B2C on day to day basis, I wouldn't recommend it to my worst enemy. It's a shitshow. It's impossible to maintain, heavily obfuscated by policies being xml instead of code, with weird naming everywhere. Very, very limited in what it can do and how(try making somewhat of a 'switch' statement). It usually ends up being a Frankenstein of copy-pasted sample policies from their Github that barely work. Debugging is a nightmare, logging to App Insights in production shouldn't be used(it requires 'Development mode') and even if it would, logs it produces are terrible and usually say nothing, with most errors being 'Internal Server Error' and nothing else. UI is also very 90s-like and to fully customize requires a ton of jquery and magic strings hardcoded everywhere. To customize emails in any way you still have to pay for Sendgrid anyway. I genuinely hate the experience.
I just checked the prices of FusionAuth, and clearly your company is not interested in smaller side-gig-like customers or self-funded startups that need to grow. Basic, production cloud options (non-eval) start at $162/mo for 10,000 MAUs. Once I move the slider to over 10,000 MAUs, the basic option is gone, and the cheapest option suddenly jumps to $1062/mo.
For smaller companies, we recommend business cloud with community edition, which starts at $225/month. The basic hosted version doesn't have backups and so isn't suitable for prod use. I get that this is a lot for a side project (I wouldn't use it for one). Or a self-funded startup--I remember one startup where the entire application was running on about $75/month in hosting spend on heroku. No way would I have paid $225/month for auth.
We have a slightly complicated pricing model (with both hosting and licensed editions, creating a matrix that is not typical), but I truly appreciate your feedback and will share it internally.
> For your use case, I'd probably recommend self hosting community edition. FusionAuth price: $0.
But self-hosting has the same issue as Keycloak: Email inboxing. You do not want your signup verification emails to land in spam folders. So you end up paying for an email mailboxing provider, at which point I'd rather go with a hosted auth solution that takes care of that.
Recently deployed Keycloak as a front end to hide legacy auth in our org and put OAuth in front of it. Despite the apparent complexity under the hood, it works great and no complaints so far. By and large, it just does what it says on the tin and works.
Is Keycloak a good option if I want to set up a SAML Service Provider using user records from my own MySQL database? I've looked at Okta and Keycloak and it's not really obvious whether I'm supposed to give up my User table and let the auth system handle it, or whether the user data ends up being spread between my DB and the auth system (I think that's how Okta would be implemented).
I know I could roll my own with PassportJS or something, but I'd like all the nice Okta stuff (MFA, password policies, SAML SSO, maybe federation) but integrated with my existing DB. Or is that just too much to ask?
Are you asking if you can use Keycloak with your own user table? Typically these identity providers want to own the user, so would expect you to port the user info, including password hashes, into them.
If you have data that is user related but not auth related (application specific data), I've seen a few patterns:
* Push it all into the auth provider. Not sure about keycloak, but some providers have the ability to store arbitrary data (a blob, basically) about a user.
* Create a table in your database with an identifier provided by Keycloak, preferably an immutable one (there's a rough sketch of this a little further down). Then when a user logs in, you can find their identifier, then look up the application specific data.
If you want to have all PII in one place, the former option is best. If you want to maximize your flexibility, the latter is what I'd suggest.
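If it helps, the second pattern mostly boils down to keying local rows on the token's stable sub claim rather than on an email address; a rough sketch (the data-access layer here is hypothetical):

// Keycloak's "sub" claim is a UUID that never changes for a user, so it is a
// safe foreign key even if the email or username is edited later.
interface AppUser {
  keycloakSub: string;  // immutable identifier taken from the verified token
  displayName: string;  // convenience copy, safe to refresh on each login
}

// Hypothetical data-access layer; substitute your own ORM or queries.
declare const db: {
  appUsers: {
    findBySub(sub: string): Promise<AppUser | null>;
    insert(user: AppUser): Promise<AppUser>;
  };
};

async function findOrCreateAppUser(claims: { sub: string; preferred_username?: string }): Promise<AppUser> {
  const existing = await db.appUsers.findBySub(claims.sub);
  if (existing) return existing;
  return db.appUsers.insert({
    keycloakSub: claims.sub,
    displayName: claims.preferred_username ?? claims.sub,
  });
}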
If you want to keep the user data in your database, I'd look at a library (as you suggest). It's a different class of solution than a standalone auth provider like Keycloak.
> Are you asking if you can use Keycloak with your own user table?
Yes, that's what I'm asking. Because my User table has relationships with the rest of my database, and I want to keep that (user X owns application-item Y; User X created Comment Z on application-item Y, etc... you get it).
> Create a table in your database with an identifier provided by Keycloak, preferably an immutable one. Then when a user logs in, you can find their identifier, then look up the application specific data.
This looks like a good option to me. I'd be fine with PII in Keycloak, and app-specific stuff in my database, provided there's a solid link between the two. If the user wants to change their email, that goes into Keycloak, and if they do something else that is connected to the application function, that goes into the app DB.
Transitioning a preexisting web stack in our corporate network from Identity Server to Keycloak has been my extremely rough intro to the world of auth. I would say I’m almost there, but have one issue holding me up. We have a few different data enclaves, including one that requires users to sign an NDA and be added to an AD group. I’ve been searching high and low to see if Keycloak has a simple flag to say “don’t let anyone in that isn’t a member of this AD group”. Does that exist or do I have to create groups in Keycloak itself and add users manually?
Keycloak is a great piece of software if you have to authenticate against AD (on prem, not Azure AD). It's the best way to isolate all the crap like "user accounts live in this OU, but admin accounts live only in that OU. Oh, and we also have another domain where contractors live in the same OUs. And the groups that map to application roles are the same, but live in differently named OUs" and provide a simple OAuth 2.0/OIDC authentication/authorization interface to all this mess.
My biggest issue in the version I was evaluating:
Some service providers use “email” as username (in fact many do.)
Keycloak doesn’t make it easy to prohibit users from changing their own email, making it trivial to impersonate someone else and gain access one shouldn’t have.
Keycloak actually makes it very easy now, assuming you have account-api and account2 feature flags set (default these days). You remove "manage-account" inside the "account" client from the default roles. Do mind this breaks the account console for those users (which is what you probably want anyway).
So you could only allow an email change if the user proved they owned the new email account by clicking a link or entering a code sent to that account?
Seems like a natural option.
Of course, allowing you to disallow email changes seems pretty reasonable too.
The issue if I remember correctly was that you could require the email to be verified. But while that verification was pending, it would already use the new email as the user’s asserted attribute.
For those that may not be aware, Keycloak is the upstream of Red Hat's SSO solution "Red Hat Single Sign-On", so there is a lot of weight behind its development.
I'm building something with it currently and it's quite nice, especially if you are already familiar with spring security. Documentation is quite sparse tho.
I've integrated keycloak for SSO between several self-hosted apps, and boy, I had no idea what I was doing; I copy-pasted config from left and right from various sites, which worked in the end. But does SSO have to be this complicated?
Can someone recommend a simpler solution to integrate SSO with online tools (like NextCloud, Discourse, Wiki.js, Gitea etc)?
I've used many solutions, even built my own in the end. Problem is that there's no software that can make SSO easy to understand if you don't know everything about SSO to begin with.
SSO is not that complicated to work with, the docs are just stupidly difficult to read and the point of the whole process is rarely explained. Learning how without knowing why is nearly impossible.
My biggest complaint with Keycloak is that the documentation is poor. You will need to set various flags to fit your use-case. Lots of googling and SO to get things running. Some of the options have changed since 17.x so many guides are outdated.
This was definitely true for the Wildfly/JBoss version, but on Quarkus configuring Keycloak is mostly trivial now[1]. The only options I had to set outside environment variables in production are related to Infinispan discovery.
I used Keycloak about 4 or 5 years ago in a former job. It did work very well. Note however, that we did not need to customize anything nor did we have to deal with scaling (in house web-app where it was rare to have more than 100 people using on it at any given day).
Right now, I'm looking into https://supertokens.com/
I have not used it in any capacity but it does seem much more approachable (at least to me) and allows for much flexibility (code-wise, on your own backend).
I would be interested in hearing if anyone here has any feedback.
I'm one of the project creators. Happy to discuss your specific use case and evaluate different providers.
Some recent feedback:
"Hi yes, we are testing SuperTokens right now, and we are loving it
Thanks for the great tool btw. it is really comfortable to integrate it into the project.
at first we decided to use Keycloak, spent a week to customize it for our needs. It was soooo painful :sweat_smile: then we started looking for alternatives"
"Been using it a bit for a pet project and I am quite impressed of how many integrations and documentation I find on its page. Also, really easy to use!"
"Just in case you are wondering where I found out about SuperTokens, it was on HackerNews"
Is there a minimal config to run and set up Keycloak with Docker for local development? Most sources suggest exporting and reusing a realm-export.json, but it is missing the user data and includes lots of (default) options and random uuids. There is an example repo, but it seems out of date and missing some settings: https://github.com/keycloak/keycloak-demo/blob/master/demo-r...
My biggest complaint though, is the storage of configuration in a database. This makes deployment complex, because you can't just deploy code, you must also mutate your database on each deploy (in many cases).
I'm looking into a more modern approach, perhaps disallowing any system admin access (only user admin) and requiring all configuration changes to be applied via code using the API... please provide any recommendations for a better approach!
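Not a full answer, but a sketch of what "config changes applied via code on deploy" could look like with the Admin REST API: keep each client definition as JSON in the repo and apply it idempotently. The realm name, file path and token handling below are placeholders (the token would be fetched the same way as in the earlier sketch):

    import json
    import requests

    BASE, REALM = "http://localhost:8080", "myrealm"          # placeholders
    auth = {"Authorization": "Bearer <admin access token>"}   # obtain as in the sketch above

    # A ClientRepresentation exported once and then kept in version control.
    with open("keycloak/my-app-client.json") as f:
        desired = json.load(f)

    # Create the client if it doesn't exist yet, otherwise update it in place,
    # so running this on every deploy converges to the committed config.
    found = requests.get(f"{BASE}/admin/realms/{REALM}/clients",
                         params={"clientId": desired["clientId"]}, headers=auth).json()
    if found:
        requests.put(f"{BASE}/admin/realms/{REALM}/clients/{found[0]['id']}",
                     json=desired, headers=auth).raise_for_status()
    else:
        requests.post(f"{BASE}/admin/realms/{REALM}/clients",
                      json=desired, headers=auth).raise_for_status()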
Does Keycloak or any of the alternatives mentioned here do a good job of supporting localization? What about customization of the email messages for lost password flows?
zitadel allows you to change the email and login texts, but only for the languages already supported, which at the moment of writing are en, it, and de.
Recently been using it and have been happy with it. Being able to plug some of our existing providers in and have one solution to integrate with has been the big win.
Slightly off-topic: Could anybody recommend a lightweight, self-hosted PHP IAM that would handle new accounts (with email confirmation), password recovery, maybe user groups? I've used WordPress a couple of times just for the user management, not very proud of that but I didn't know better :/
Geez, I don't know of any. If you have to stick with PHP (as opposed to using some of the other solutions mentioned in comments), I'd probably look to a framework. For example, here's Laravel's offering: https://laravel.com/docs/9.x/authentication#authentication-q...
I'm not looking for SSO. Did I use IAM wrongly?
I'm talking about tiny apps that do require a protected back-end access, generally for somewhat bootstrapped CSOs. Not looking for anything fancy, but rather quick and dirty. For example, updating a SQLite db via gSheets API.
PHP, as that's the default for shared hosting.
This part of the IAM space (SSO and AuthN) is so crowded with products, both paid and open source. When you look at governance, certification, lifecycle management, approvals, etc. there is almost nothing by comparison. A couple of not-so-great commercial products and very little open source. Hopefully that changes.
Agreed, I mean that's why we built our solution completely open source in an open SaaS model and certified [1] our solution. We even provide our pentest results publicly, after mitigation of course [2].
The value we provide, when you are using our cloud, is operational peace of mind, global scalability with data residency, and access to deep technological knowledge.
We wrote a few words about that in our blog [3] as well.
In a large, highly regulated business, these are all features that are needed in an IAM platform. The specific regulations vary by industry, country, etc.
Onboarding - birthright provisioning of accounts across many systems. Email, directory, etc.
Termination - automatically remove application level access across the business, not just the user's sso access.
Approval - the ability to request access to a system, have it go through a series of approvals (which are audited) and then, if approved, provision the correct level of access in the end system.
Certification - the ability to do periodic access reviews of users. This is typically run yearly or quarterly and you would be asking the user's manager and possibly the application owner to review their access and decide if it is appropriate. If the choice is made to revoke it the IAM system should go directly to the application and remove their access.
Yes SCIM covers some of this, but it is just a protocol.
I had to decide between Keycloak and SuperTokens just last week. I went through both of their documentation and repos and decided to go with SuperTokens. Backend and frontend flexibility and their instant Discord support for my questions were a huge plus for me.
Keycloak is great if you need an SSO solution. We had a large number of small in-house applications at my previous job and managing logins was becoming a pain.
At my current job we use Okta. While the documentation is better, I don't think I prefer it over KC.
Keycloak is great, using it in several projects. The only thing I've been missing is support for IdP discovery services, such as https://seamlessaccess.org/
We are currently using Shibboleth, and would love to get away from using Java/Tomcat. It looks like Keycloak also uses Java. Is there an alternative to this that doesn't require it?
Does it/will it support Forward Auth style usage for plugging in with reverse proxies? Asking as a Caddy maintainer, I'm working on this right now and we've been working with Authelia to test it out, would be cool to get it working with another Go auth server! See https://github.com/caddyserver/caddy/pull/4739
First and most importantly, thank you for maintaining such a cool project with Caddy! I use it all the time as an nginx alternative (despite having been an nginx fan in the past).
We do not directly support "forward auth" concepts. But what you could do is use OpenID Connect to authenticate the user prior to allowing traffic to flow to upstream services. That's more or less what oauth2-proxy does as well.
The reason why we are not so fond of "forward auth" is that in many setups authentication needs to scale beyond one ingress, and in that case it makes more sense to create a centralised session for a user with an identity system.
If you are intrigued to discuss this subject, I would encourage you to join our discord https://zitadel.ch/chat
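For readers who haven't run into the pattern: with forward auth the reverse proxy makes a subrequest to an auth endpoint for every incoming request and only proxies it upstream if that endpoint answers 2xx. A toy sketch of such an endpoint (Flask; the cookie check and login URL are invented, and this shows the generic contract rather than anything Keycloak or zitadel ship):

    from flask import Flask, redirect, request

    app = Flask(__name__)

    def valid_session(cookie):
        # Stand-in for real session validation against your identity provider.
        return cookie == "letmein"

    @app.get("/authz")   # the reverse proxy (e.g. Caddy's forward_auth) calls this
    def authz():
        # The proxy passes the original request along in X-Forwarded-* headers,
        # and the user's cookies come with the subrequest.
        if valid_session(request.cookies.get("session")):
            return "", 200   # 2xx -> proxy lets the original request through
        # Otherwise deny, or send the browser off to an OIDC login flow.
        return redirect("https://auth.example.com/login?rd="
                        + request.headers.get("X-Forwarded-Uri", "/"))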
ORY is amazing, but it also requires significant investment. It's a headless API (so you never have to touch OAuth/OIDC internals) for building your own IdP.
Has anyone used WSO2 Identity Server? It's an open source identity solution with many customizations available, though it requires a license for commercial use.
I've used both the open-source WSO2 and Keycloak. The latter is better designed and suffers fewer glitches, which is important when dealing with the complexities of OIDC/OAuth2. It's not that WSO2 doesn't do what it says on the tin; it works. Keycloak just works so well it's almost fun.
One feature Keycloak lacks compared to WSO2 is SCIM (System for Cross-domain Identity Management). That actually matters to me. There is a third party Keycloak extension[1] that implements SCIM, but I can't speak to it.
What would you say have been the positives and negatives of it? Is the learning curve above and beyond OIDC/OAuth? How much did you have to customize it?
Keycloak just assumes you know OIDC terminology, and it has some quirks that you might not expect (e.g. until recently, client credential grants created a refresh token).
It also, concerningly, uses some OIDC terminology outside of OIDC. There are two kinds of "scope" in Keycloak: OIDC client scopes (sets of mappers that represent permissions), and the role scope that controls which client and realm roles can be included in a token (including the famous "Full Scope Allowed" option that'll dump all roles into your token if you stick with the defaults).
A lot of behavior is also just implicit, particularly in the Authentication Flow editor: you just gotta know what you want if you want to customize a flow. The Audience mapper is another tricky one, relying on the (not-OIDC) scope to figure out which client to put into the "aud" claim.
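To make the two kinds of "scope" concrete, here is roughly what a decoded access token payload ends up looking like with Full Scope Allowed on and an Audience mapper adding your API client to "aud" (the claim values below are made up):

    # Illustrative decoded access-token payload; values are invented.
    token_payload = {
        "aud": ["my-api", "account"],        # filled by Audience mappers / defaults
        "scope": "openid profile email",     # OIDC client scopes (bundles of mappers)
        "realm_access": {                    # realm roles pulled in by the role scope
            "roles": ["offline_access", "uma_authorization", "app-admin"],
        },
        "resource_access": {                 # client roles, grouped per client
            "account": {"roles": ["manage-account", "view-profile"]},
            "my-api": {"roles": ["reader"]},
        },
        "preferred_username": "alice",
    }

    # An API checking permissions usually reads its own client's roles:
    api_roles = token_payload["resource_access"].get("my-api", {}).get("roles", [])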
From what I've heard, Keycloak is better supported and has a bigger community than Gluu (12k stars vs 300 stars on GH). They are both open source and written in Java. I've heard from users that Gluu can be a resource hog.
It depends on what your needs are. If you are using it for CIAM (customer identity and access management) then you are probably going to be putting it on the internet. If using it for internal IAM (identity and access management, also known as workforce identity) you can keep it inside your network.
Thanks. I was mostly wondering how comfortable "Hacker News" is exposing it publicly... going on zero experience and just gut feeling I put it at about the same level as RDP.
Yes, I've seen it used both as primary factor and as a second authentication factor (I'm not sure you would ever need it as 2FA, but whatever). Requires some setup, though.
I'm not a big fan of Java personally, as in I would not start a side project with it.
But in this case it's actually a great choice. Static typing, stable, mature, good performance, known by tons of engineers all over the world... And heavily used in the enterprise market which has a very real need for SSO.
I think keycloak is fantastic. The only thing I don't really like is how the user profile is deeply baked into the code. It seems they're working on improving this, but at least currently, it is not possible to have users that do not use a "First name" and a "Last name". Which is rather annoying when using it for things like game servers, where people just want to go by their username.
The Quarkus-based image is still pretty big, though :) Beware: they aren't using Docker Hub anymore; newer versions are on Quay only (https://quay.io/repository/keycloak/keycloak).
I'm happy with Keycloak. Also nice folks around Keycloak.