makkesk8's comments

Running GitLab at any kind of scale beyond a single server is a major PITA. And it's very poorly optimized.


Gods, for some reason GitLab consumes 5-10% of a CPU at all times. I spent weeks trying to get it to calm down to reduce our AWS spend. Absolutely no changes no matter what I tried. On my 2013 Xeon server at home it's even worse.

GitLab is great, I really do enjoy working with it. I hate running it.


Yeah, I love gitlab as a user - but as an admin, the performance feels like something out of the 90s. I had to use the gitlab-rails REPL console for something a couple of weeks ago. Even on a server with tons of headroom, it took *10 minutes* to start up?


That I am well aware of. I hate running Gitlab. I just wonder what different features different people are missing.


Awesome work! A killer feature of sqlite that I would love to see in pglite would be javascript window functions.


Huh, can you do that? I can't find anything on Google just now - got a link to some docs about JS window functions in SQLite?! Sounds very powerful.


Maybe they're talking about https://www.sqlite.org/appfunc.html where you can define a window fn in the (perhaps JS) app via `sqlite3_create_window_function()`.
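For reference, here's a rough sketch of what an app-defined window function looks like from JS - assuming better-sqlite3 as the host library (my assumption, not something the parent mentioned). Supplying an `inverse` callback is what turns a plain aggregate into a window function:

    // Sketch: app-defined window function via better-sqlite3's aggregate API.
    import Database from 'better-sqlite3';

    const db = new Database(':memory:');

    db.aggregate('sumAll', {
      start: 0,
      step: (total, next) => total + next,          // row enters the frame
      inverse: (total, dropped) => total - dropped, // row leaves the frame
      result: (total) => total,
    });

    db.exec('CREATE TABLE t (x INTEGER)');
    db.exec('INSERT INTO t VALUES (1), (2), (3)');
    const rows = db
      .prepare('SELECT x, sumAll(x) OVER (ORDER BY x ROWS 1 PRECEDING) AS s FROM t')
      .all();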


This is high on my list. I have a few ideas for how to do it, one being a "PL/JS" extension that calls out to the host JS environment to run JS functions.



I think PG is relatively ideal for that. In a classical data warehousing/ETL context, I've called python directly from inside PG, which has its quirks but is pretty doable, all in all...

https://www.postgresql.org/docs/current/plpython.html


We moved over to garage after running minio in production with ~2PB for about 2 years of headaches. Minio does not deal with small files very well - rightfully so, since they don't keep a separate index of the files other than straight on disk. While SSDs can mask this issue to some extent, spinning rust, not so much. And speaking of replication, with garage it just works... Minio's approach, even with synchronous mode turned on, tends to fall behind, and again, small files will pretty much break it altogether.

We saw about 20-30x performance gain overall after moving to garage for our specific use case.


Quick question for advice: we have been evaluating minio for an in-house deployed storage system for ML data. This is financial data, so we have to comply with a crap ton of regulations.

So we wanted lots of compliance features - access logs, access approvals, short-lived (time-bound) access, etc.

How would you compare garage vs minio on that front?


You will probably put a proxy in front of it, so do your audit logging there (nginx ingress mirror mode works pretty well for that)


As a competing theory: since both Minio and Garage are open source, if it were my stack I'd patch them to log with whatever granularity I wished, since in my mental model the system of record will always have more information than a simple HTTP proxy in front of it.

Plus, in the spirit of open source, it's very likely that if one person has this need then others have this need, too, and thus the whole ecosystem grows versus everyone having one more point of failure in the HTTP traversal


Hmm... maybe??? If you have a central audit log, what is the probability that whatever gets implemented in all the open (and closed) source projects will be compatible?


Log scrapers are decoupled from applications. Just log to disk and let the agent of your logging stack pick it up and send to the central location.


That isn't an audit log.


Why not? The application logs who did what, and when, to disk. These are application-specific audit events, and such patches should be welcomed upstream.

A log scraper takes care of long-term storage, search, and indexing, because you want your audit logs stored in a central location eventually. This is not bound to the application, and upstream shouldn't be concerned with how one does this.


That assumes the application is aware of "who" is doing it. I can commit to GitHub with any name/email address I want, but only GitHub's proxy servers know who actually pushed the commit.


That's a very specific property of git, stemming from its distributed nature: it allows one to push the history of a repo fetched from elsewhere.

The receiver of the push is still considered an application server in this case. Whether GitHub solves this with a proxy or by reimplementing the git protocol and solving it in-process is an internal detail on their end. GitHub is still "the application". Other git forges do this type of auth in the same process without any proxies - Gitlab or Gerrit, for example, open source and self-hosted, making this easy to confirm.

In fact, for such a hypothetical proxy to be able to solve this scenario, the proxy must have an implementation of git itself. How else would it know how to extract the committer email and cross-check that it matches the logged-in user's email?

An application almost always has the best view of what a resource is, the permissions set on it and it almost always has awareness of “who” is acting upon said resource.


> Thats a very specific property of git, stemming from its distributed nature.

Not at all. For example, authentication by a proxy server is old-as-the-internet. There's a name for it, I think, "proxy authentication"?[1] I've def had to write support for it many times in the past. It was the way to do SSO for self-hosted apps before modern SSO.

> In fact, for such a hypothetical proxy to be able to solve this scenario, the proxy must have an implementation of git itself.

Ummm, have you ever done a `git clone` before? Note the two most common types of URLs: https and ssh. Both of these are standard implementations. Logging the user that is authenticating is literally how they do rate limiting and audit logging. The actual git server doesn't need to know anything about the current user or whether or not they are authenticated at all.

1: https://www.oreilly.com/library/view/http-the-definitive/156...


Enough of shifting the goalposts. This was about applications doing their own audit logging; I still don't understand what's wrong with that. Not made-up claims that applications or a git server don't know who is acting upon them. Yes, a proxy may know "who" and can perform additional auth and logging at that level, but it often has a much less granular view of "what". In the case of git over HTTP, I doubt nginx out of the box has any idea what a branch or a committer email is; at best you will only see a request to the repo name and the git-upload-pack URL.

Final food for the trolls. Sorry.


That's very cool; I didn't expect Garage to scale that well while being so young.

Are there other details you are willing/allowed to share, like the number of objects in the store and the number of servers you are balancing them on?


I used to use ManicTime[1] to achieve something similar, although that was for time-tracking purposes. But I can admit I've used it multiple times to find websites and documents whose names I'd forgotten, using the screenshots. In essence, it suffers from the same flaws as Recall.

[1] https://www.manictime.com


Looks useful, but what baffles me is: why is every framework setting state (or their "signals") using "setX" functions? What's wrong with the built-in getters and setters that you can either proxy or straight-up override?

This feels arguably cleaner: something = "else";

Than: setSomething("else");


Some libraries that feature signals-style reactivity do use getters and setters (Ember.js and MobX are two good examples). However, it makes sense for the primitive API to use functions, since getters and setters are functions under the hood and get applied to an object as part of a property descriptor. It's also not always desirable to have a reactive value embedded in an object; sometimes you just want to pass around a single changeable value.

As for why some libraries choose the `[thing, setThing] = signal()` API (like solid.js), that's often referred to as read/write segregation, which essentially encourages the practice of passing read-only values by default and opting in to allowing consumer writes on a value by explicitly passing its setter function. This is something that was popularized by React hooks.

Either way this proposal isn't limiting the API choice of libraries since you can create whatever kind of wrappers around the primitive that you want.
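For a concrete example of that split, the solid.js flavor looks roughly like this (a sketch, not part of the proposal itself):

    import { createSignal } from 'solid-js';

    // The getter and setter are separate values, so you can hand out
    // `count` (read-only) without also exposing `setCount`.
    const [count, setCount] = createSignal(0);

    console.log(count()); // 0
    setCount(1);          // only code that was given the setter can write
    console.log(count()); // 1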


    something = "else";
That is not observable for the primitive JS types that aren't objects and have no methods or properties or getters/setters (string, number, boolean, undefined, symbol, null).

    some.thing = "else";
The `some` can be proxied and observed. Most frameworks are setting up the `some` container/proxy/object so everything can be accessed and observed consistently. Whether the framework exposes the object, a function, or hides it away in a DSL depends on the implementation.
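A minimal sketch of the difference (names are made up):

    // A bare `something = "else"` can't be intercepted, but a property
    // set on a proxied container can notify whoever is observing it.
    const some = new Proxy({ thing: 'initial' }, {
      set(target, key, value) {
        target[key] = value;
        console.log('observed:', key, '->', value); // notify subscribers here
        return true;
      },
    });

    some.thing = 'else'; // logs: observed: thing -> else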


One big problem is that right now they are _tremendously_ slow to use (at least through the native Proxy class). Not sure if this is an artifact of JITs or the nature of prototypal inheritance.


Because javascript lacks scope-as-an-object. You can't track `var x; x = value` through it. `setSomething()` sends a notification after an assignment. Also, DOM elements can only take raw values and must be manually updated from your data flow.

Adding these features in-browser would seriously slow down DOM and JS and thus all websites for real. So instead we load megabytes of JS abstraction wrappers and run them in a browser to only simulate the effect.
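To make the first point concrete, a stripped-down signal might look like this (a sketch, not any particular framework's implementation):

    // The setter is a function so it can notify subscribers after the
    // assignment - something a plain `x = value` can never do.
    function signal(initial) {
      let value = initial;
      const subscribers = new Set();
      const get = () => value;
      const set = (next) => {
        value = next;
        subscribers.forEach((fn) => fn(value)); // e.g. manually update a DOM node
      };
      get.subscribe = (fn) => subscribers.add(fn);
      return [get, set];
    }

    const [getCount, setCount] = signal(0);
    getCount.subscribe((v) => { document.title = `count: ${v}`; });
    setCount(1); // title becomes "count: 1"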


For one, I can write ".set" and the IDE will auto-complete with all possible somethings that can be set, even without my having the slightest idea of which ones there are.

I've very much enjoyed this kind of consistency wherever it's found (having a common prefix for common behaviors - in this case, setters).


Well, to use setters it has to be "foo.something = else", because JS can't override plain old local bindings -- not since "with" was sent to the cornfield, anyway. Once you do that, you can indeed have a framework that generates getters and setters, which is exactly what Vue 2 does. Switch to proxies instead of get/set and you have Vue 3 -- the signals API is pretty much identical to the Vue composition API.


Someone at Azure thought of this[1]

[1] https://github.com/Azure/fetch-event-source
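If it helps, basic usage looks roughly like this (a sketch; the endpoint and payload are made up). Unlike the built-in EventSource, you can send headers and a POST body:

    import { fetchEventSource } from '@microsoft/fetch-event-source';

    await fetchEventSource('/api/stream', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query: 'hello' }),
      onmessage(ev) {
        console.log(ev.data); // each SSE message as it arrives
      },
      onerror(err) {
        console.error(err);
      },
    });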


Having gone down the rabbit hole of creating printable media/pages from HTML/CSS, I also discovered paged media. Generating printable PDFs using only the standard is very cumbersome, but thankfully pagedjs[1] exists, which makes this a breeze.

[1] https://github.com/pagedjs/pagedjs
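From memory, the basic flow is roughly this (the selector and stylesheet path are made up):

    import { Previewer } from 'pagedjs';

    // Chunk the document into @page-sized fragments in the browser,
    // then print to PDF from the resulting preview.
    const previewer = new Previewer();
    previewer
      .preview(
        document.querySelector('#content').innerHTML,
        ['/styles/print.css'],
        document.body
      )
      .then((flow) => console.log(`rendered ${flow.total} pages`));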


After working with Keycloak for a couple of years I honestly got fed up with all its quirks and started to look at alternatives. Authentik and many more looked promising, but Zitadel[1] caught my eye and I've never looked back since.

[1] https://zitadel.com/blog/zitadel-vs-keycloak


Nice to hear. Glad you like it. What was the winning thing for you?


Netmaker[1] is another player in the space

[1] https://www.netmaker.io


Wow, every time I check in on it, it feels like it's 10x the product it was 4-6 months prior. Impressive!


The downside of that is its instability. I enjoy it as a product - I was using it almost two years ago and it was good enough to set up a small scale mesh network at that point - but they basically were issuing breaking releases every month or two. Maybe now that they have a cloud offering it has stabilized.


> however beyond Windows I don't think it has neither an established presence nor ecosystem

I'd respectfully disagree with this statement based on my personal experience. Ever since .NET Core was introduced, I've noticed a significant shift in the hosting of ASP.NET apps. Many developers, including myself, now prefer hosting applications in containers on Linux systems rather than relying solely on Windows. This change reflects a broader trend among C# developers.

While I understand that my perspective might not encompass the entire developer community, I strongly believe that the adoption of Linux-based hosting for ASP.NET applications has grown considerably. It demonstrates the expanding reach and influence of .NET Core beyond the Windows ecosystem, proving its establishment in other platforms.

Please note that this is solely my viewpoint based on my experiences and interactions with other developers. Other opinions may vary, but I remain confident in the growing prominence of .NET Core outside of the traditional Windows environment.

