For me it’s up and running.
I was doing some work in AI Studio when it was released and have already rerun a few prompts. It's also interesting that you can now set the thinking level to low or high. I hope it actually does something; in 2.5, increasing the maximum thinking tokens never made it think more.
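For what it's worth, a minimal sketch of what that knob looks like via the API, assuming the google-genai Python SDK, and assuming the thinking_level field and the gemini-3-pro-preview model name are right (my guesses, not confirmed in this thread):

    # Sketch only; model name and thinking_level values are assumptions.
    # Requires: pip install google-genai
    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")  # bring your own key

    response = client.models.generate_content(
        model="gemini-3-pro-preview",  # assumed model identifier
        contents="Summarize the Collatz conjecture in two sentences.",
        config=types.GenerateContentConfig(
            # assumed to accept "low" or "high"
            thinking_config=types.ThinkingConfig(thinking_level="low"),
        ),
    )
    print(response.text)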
You can bring your own Google API key to try it out, and Google used to give $300 free when signing up for billing and creating a key.
When I signed up for billing via the Cloud Console and entered my credit card, I got $300 in "free credits".
I haven't thrown a difficult problem at Gemini 3 Pro yet, but I'm sure I got to see it in some of the A/B tests in AI Studio for a while. I could not tell which model was clearly better; one was always more succinct and I liked its "style", but they usually offered about the same solution.
Oof. So much opportunistic grandstanding and virtue signaling in the comments there. I read for 5 minutes and didn't find even a single comment that expressed any uncertainty about the truth or accuracy of the allegations.
In general I think this was quite a reasonable comment section. I see a lot of "damn this sounds awful" (and it does), discussion about the general phenomenon of sexual harassment (which is obviously real) rather than that specific case, and some uncertainty about what actually happened. I don't see much "this guy should be jailed immediately" in the top comments. I certainly wouldn't call it a mob, and I don't see anything that deserves to be labelled as insincere virtue signaling.
A lot of works of fiction sound credible. Are you going to believe those?
You don't have all the information. You weren't there. You don't even know the people personally. You are not in a position to make any judgement either way.
Something sounding credible doesn't make it true. It doesn't automatically make it false, either. You don't have to believe the accuser or the accused. The only thing any of us should do is mind our own business.
You can have whatever opinion you want, but don't confuse "sounds credible" with evidence. From the sidelines, you don't know enough to judge either way. Saying "I don't know" is the only accurate position. Everything beyond that is just speculation - and speculation is exactly what keeps cancel culture alive.
Ultimately, this reflects just terribly on the Scala community and every individual who signed the open letter, including Brian Clapper himself and over 300 others. You can read the full list of names here: https://scala-open-letter.github.io/
Having been in a similar situation myself as a teenager, I find it truly abhorrent how quickly people are willing to jump to conclusions against someone based on the most limited information, without giving the accused any chance to tell their side of the story or defend themselves. Not even a single one of my so-called friends asked me what happened, and almost all of them disappeared from my life permanently.
What I learned from the experience was that none of the people who jumped on the cancel bandwagon had ever been worth even a second of my time. It was their loss, and I became much more careful about who I choose as friends after that.
I can certainly say that if I encounter any of the 300+ individuals listed in the letter in my personal or professional life, I will be giving them a very wide berth indeed.
> Ultimately, this reflects just terribly on the Scala community
Maybe the lesson here is that people should be cautious about getting so involved in communities like this that being cancelled by the community can do this much damage.
The logical consequence of this would be that all it takes to destroy someone's reputation is collusion between just two people who decide to make false allegations against someone. That is, frankly, ridiculous. Inadequacy of the justice system and the difficulty of prosecuting cases where there is a lack of (or in this case, no) evidence, doesn't justify abrogating the principle of "innocent until proven guilty."
This is useful, and directly contradicts the terms and conditions for Gemini CLI (edit: if you use a personal account, then it's governed under the Code Assist T&C). I wonder which one is true?
If you're using Gemini CLI through your personal Google account, then you're using a Gemini Code Assist license and need to follow its T&C. Very confusing.
Collection means it gets sent to a server; logging implies (permanent or temporary) retention of that data. I tried to find a specific line or context in their privacy policy to link to, but maybe someone else can help provide a good reference. Logging is a form of collection, but not everything collected is logged unless stated as such.
They really need to provide some clarity on the terms around data retention and training for users who access Gemini CLI free via sign-in with a personal Google account. It's not clear whether the Gemini Code Assist terms are relevant, or indeed which of the three sets of terms linked at the bottom of the README.md applies here.
Thank you, this is helpful, though I am left somewhat confused as a "1. Login with Google" user.
* The first section states "Privacy Notice: The collection and use of your data are described in the Gemini Code Assist Privacy Notice for Individuals." That in turn states "If you don't want this data used to improve Google's machine learning models, you can opt out by following the steps in Set up Gemini Code Assist for individuals.". That page says to use the VS Code Extension to change some toggle, but I don't have that extension. It states the extension will open "a page where you can choose to opt out of allowing Google to use your data to develop and improve Google's machine learning models." I can't find this page.
* Then later we have this FAQ: "1. Is my code, including prompts and answers, used to train Google's models? This depends entirely on the type of auth method you use. Auth method 1: Yes. When you use your personal Google account, the Gemini Code Assist Privacy Notice for Individuals applies. Under this notice, your prompts, answers, and related code are collected and may be used to improve Google's products, which includes model training." This implies Login with Google users have no way to opt out of having their code used to train Google's models.
* But then in the final section we have: "The "Usage Statistics" setting is the single control for all optional data collection in the Gemini CLI. The data it collects depends on your account type: Auth method 1: When enabled, this setting allows Google to collect both anonymous telemetry (like commands run and performance metrics) and your prompts and answers for model improvement." This implies prompts and answers for model improvement are considered part of "Usage Statistics", and that "You can disable Usage Statistics for any account type by following the instructions in the Usage Statistics Configuration documentation."
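For reference, that "Usage Statistics" toggle appears to live in the CLI's settings file. A minimal sketch, assuming the documented usageStatisticsEnabled key in ~/.gemini/settings.json is the control being described here:

    {
      "usageStatisticsEnabled": false
    }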
So these three sections appear contradictory, and I'm left puzzled. It's a poor experience compared to competitors like GitHub Copilot, which makes opting out of model training easy via a checkbox on the GitHub settings page, or Claude Code, where Anthropic's policy is that code will never be used for training unless the user specifically opts in (e.g., via the reporting mechanism).
I'm sure it's a great product - but this is, for me, a major barrier to adoption for anything serious.
Does anyone know what Google's policy on retention and training use will be when using the free version by signing in with a personal Google account? Like many others, I don't want my proprietary codebase stored permanently on Google servers or used to train their models.
At the bottom of README.md, they state:
"This project leverages the Gemini APIs to provide AI capabilities. For details on the terms of service governing the Gemini API, please refer to the terms for the access mechanism you are using:
* Gemini API key
* Gemini Code Assist
* Vertex AI"
The Gemini API terms state: "for Unpaid Services, all content and responses is retained, subject to human review, and used for training".
The Gemini Code Assist terms trifurcate for individuals, Standard / Enterprise, and Cloud Code (presumably not relevant).
* For individuals: "When you use Gemini Code Assist for individuals, Google collects your prompts, related code, generated output, code edits, related feature usage information, and your feedback to provide, improve, and develop Google products and services and machine learning technologies."
* For Standard and Enterprise: "To help protect the privacy of your data, Gemini Code Assist Standard and Enterprise conform to Google's privacy commitment with generative AI technologies. This commitment includes items such as the following: Google doesn't use your data to train our models without your permission."
The Vertex AI terms state "Google will not use Customer Data to train or fine-tune any AI/ML models without Customer's prior permission or instruction."
What a confusing array of offerings and terms! I am left without certainty as to the answer to my original question. When using the free version by signing in with a personal Google account, which doesn't require a Gemini API key and isn't Gemini Code Assist or Vertex AI, it's not clear which access mechanism I am using or which terms apply.
It's also disappointing that "Google's privacy commitment with generative AI technologies", which promises "Google doesn't use your data to train our models without your permission", doesn't seem to apply to individuals.
Pyright + Pylance + Ruff has been rock solid for me on my ~100 kLOC codebase for more than a year now. I use the VS Code extensions, and Pyright and Ruff are integrated into my pre-commit hooks.
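In case it helps anyone, a minimal sketch of that pre-commit setup, assuming the astral-sh/ruff-pre-commit and RobertCraigie/pyright-python hook repos (the rev pins below are placeholders, not recommendations):

    # .pre-commit-config.yaml (sketch; pin rev to current releases)
    repos:
      - repo: https://github.com/astral-sh/ruff-pre-commit
        rev: v0.6.0            # placeholder
        hooks:
          - id: ruff           # lint
          - id: ruff-format    # format
      - repo: https://github.com/RobertCraigie/pyright-python
        rev: v1.1.380          # placeholder
        hooks:
          - id: pyright        # type check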
Ty: 2.5 seconds, 1599 diagnostics, almost all of which are false positives
Pyright: 13.6 seconds, 10 errors, all of which are actually real errors
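For anyone who wants to reproduce this kind of comparison, roughly (assuming both tools are installed and pick up the project's existing config):

    time ty check .
    time pyright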
There's plenty of potential here, but Ty's type inference is just not as sophisticated as Pyright's at this time. That's not surprising given it hasn't even been released yet.
Whether Ty will still be so much faster once it has matched or implemented all of Pyright's type inference abilities remains to be seen.
Pyright runs on Node, so I would expect it to be somewhat slower than Ty, but perhaps not by much: modern JS engines are quite fast, typically within a factor of ~2-3x of Rust. That said, I'm rooting for Ty here, since even a 2-3x performance boost would be useful.
"Failed to generate content, quota exceeded: you have reached the limit of requests today for this model. Please try again tomorrow."
"You've reached your rate limit. Please try again later."
Update: as of 3:33 PM UTC, Tuesday, November 18, 2025, it seems to be enabled.