
“The Open in openAI means that [insert generic mission statement that applies to every business on the planet].”


To those who think this list will help them get into YC, or lament "why didn't I get into YC when my idea was squarely on this list":

The YC application is a sales pitch, and you're not selling your idea: you're primarily selling your charisma and capacity to spin vision and sell. Second, you're selling your chemistry with your cofounders and the stability of your relationship. Third, you're selling your capacity to build at least some usable prototype, but this is a low bar.

At no point are you actually selling the concrete idea, unless you're doing something extremely specific that seems valuable and you're one of the few who can build it. For the rest, the idea is a rhetorical vehicle to sell the other things.


Yes, and quite often you are selling your company. Investors will try to push you out of the game if they see potential in your products. If you don't have a time-to-market-sensitive product, you are usually better off grinding in the basement for longer than people want you to believe.

If, however, you have a very narrow window to launch properly, or if your product is mostly visionary, you need investors ASAP. Just don't expect to come out with a Zuckerberg deal; expect 1-2% of the company in the end, and make sure you cash out along the way.


Spot on. Add in an Ivy League or similar pedigree as social proof for a better chance.


Having met a bunch of YC companies now, I wouldn't say the Ivies are exactly under-represented but it always seems like there's more Stanford than UPenn and more UWaterloo than Cornell. If school means anything it's the quality of their CS programs.

Don't let your school hold you back from applying :)


Is UPenn particularly good? It’s the first time I’ve heard of it in these discussions. Stanford, on the other hand, I always hear about (isn’t that where Page and Brin were?)


I think GP is saying that YC founders tend to be more techy, entrepreneurial types than Wharton MBA types. I'm not sure though, there's a lot left to the imagination.


Going off on a tangent, but would you say it's worth going to a similarly ranked uni (Oxford) if one wants to go into entrepreneurship, given its more 'academic' emphasis? As opposed to somewhere like UCSD, where it's not as prestigious but close to the tech scene. (Bonus points for California weather, ha!)


Definitely go to Oxford over UCSD. There’s a world of difference in difficulty to get into, and like the other person says, SD isn’t the biggest tech scene unless biotech is your focus. UC schools are also huge and impersonal, part of what lets them offer decently cheap tuition to in-state students; but if you’re not in-state, I don’t think you’re getting the best value for your money. My answer might change if the alternative were Berkeley.


UCSD is not really near the tech scene, so I wouldn't choose it for that. San Diego is a world away from the Bay Area. The weather in SD is definitely the best in the world, though.

The optimal path for someone in your position is to go to Oxford, then get a job/do a master's at Stanford.


I think this is an excellent point. That being said, there are some warm fuzzies that come from seeing my grad school research topic listed here as well (the ML for physics simulations topic). Just some validation that the area I'm spending time in could not only help my specific niche, but be broadly attractive to VC funding to help grow it. Not that I'm anywhere near being ready to build anything (or near graduation, for that matter), but the conversation about it being a possibility came up recently, and maybe sometime in the far future it could be a reality.


We've never been accepted, but my experience with the application process indicates this isn't strictly true. We're not getting any more charismatic, but they have continued to show interest in us applying. Not sure if that's what everyone gets or not.


They are never going to tell you to not apply. Why would they close that door?

There is no "no." They tell everyone "maybe next time."


Well there’s applying, and then there’s offering the interview.


VCs notoriously rarely say "no"[1] - just in case.

1. https://news.ycombinator.com/item?id=38677251


Read the thread beyond the top-most comment. It's true that there's a long tail of wannabes who play this game, but reputable VCs don't mince words.


> reputable VCs

Nice oxymoron you got here.


VCs rhyme with feces, and share many attributes.


but if your product works you get more confident, no?


> The YC application is a sales pitch, and you're not selling your idea, you're primarily selling your charisma and capacity to spin vision and sell. Second, you're selling your chemistry with your cofounders and stability of your relationship. Third, you're selling your capacity to build, at least some usable prototype, but this is a low bar.

YC is coming in late to the early alpha, and is betting on the talent/leadership it can soak up.

Then again, first movers don't always win.


> We've never been accepted, [...] they have continued to show interest in us applying

You're getting recurring meetings with them (which have significant opportunity cost for them), so it's not just the usual startup investor unwillingness to say "no" with finality?


From what I've heard, you also need to build up a lot of hype about yourself.


I don’t think this is true at all. I did YC, and neither I nor my cofounders had any hype surrounding us or our idea. Unless by hype you mean a handful of paying users, then sure, that won’t hurt :)


Not true for me. No social media other than HN and got in first try.


You need to be the right idea at the right time, and the right team to do the thing.

Gen AI two years ago was a toy that fit "somewhere" into the "creator economy", which seemed dead.

Now try and you'll get a completely different reception.


"FCC announces that artificial voices are indeed artificial."


How does the cabin pressure hold it in place passively? Isn't the pressure on the inside higher than the pressure on the outside?


`xclip-mode` looks like it should definitely be included by default. `cua-mode` is tougher because it messes with the default keybindings: when a region is active, reaching the large number of keybindings that start with C-x means typing C-x twice (or using Shift-C-x). That might be better for newcomers, though, and bring more people to Emacs. Personally I would disable `cua-mode` if it were the default.
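For anyone who wants to try these without waiting on new defaults, a minimal init.el sketch (assuming the `xclip` package is installed from GNU ELPA):

```elisp
;; Sync kill/yank with the system clipboard in terminal Emacs.
;; Requires the `xclip' package: M-x package-install RET xclip RET
(when (and (not (display-graphic-p))
           (require 'xclip nil 'noerror))
  (xclip-mode 1))

;; Uncomment for C-c/C-x/C-v/C-z copy/cut/paste/undo bindings,
;; with the C-x prefix caveat discussed in the thread.
;; (cua-mode 1)
```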


Probably not. It looks like an autocomplete engine. But technically you could do that with an LLM, with a more complex interface: select a region, then input a prompt like "rewrite this code in xyz way". And a yet more complex system could split the GPT output across files, etc.
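That region-plus-prompt flow is easy to sketch in Emacs Lisp. Here `my-llm-complete` is a hypothetical placeholder for whatever backend you wire up (e.g. an HTTP request to an LLM API), not a real library function:

```elisp
(defun my-llm-rewrite-region (start end prompt)
  "Replace the region between START and END with an LLM rewrite.
PROMPT describes the transformation, e.g. \"rewrite this code in xyz way\".
`my-llm-complete' is a stand-in for an actual LLM backend call."
  (interactive "r\nsRewrite instruction: ")
  (let* ((code (buffer-substring-no-properties start end))
         (rewritten (my-llm-complete
                     (format "%s\n\n%s" prompt code))))
    ;; Swap the selected text for the model's output.
    (delete-region start end)
    (insert rewritten)))
```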


I think they mean making people's lives miserable enough that they quit. Presumably they can live with whatever Amazon throws at them, and keep taking a paycheck until fired.


> making people's lives miserable enough that they quit.

I am genuinely in awe at how out of touch people seem to be. We're talking about software engineers given nothing to do and paid hundreds of thousands of dollars... Yes, this is not ideal, but let's not act as though they're torturing these people or forcing them to clean bathrooms or something.


You seem to be unaware of the mental and physical health toll these environments can take on people. Make money but become a burnt out, desiccated shell of a person.


Anthropic made $200M in 2023 and is projected to make $1B in 2024. That's a laggard, <2-year-old startup. I don't think LLMs are a fad.


Apple has been looking sleepy on LLMs, but they've been consistently evolving their hardware+software AI stack, without much glitzy advertising. I think they could blow away Microsoft/OpenAI and Google, if suddenly a new iOS release makes the OpenAI/Bard chatbox look laughably antiquated. They're also a threat to Nvidia, if a significant swath of AI usage switches over to Apple hardware. Arm and TSMC would stand to win.


I doubt Apple’s going to make some big ChatGPT-style chatbot. They’re “just” going to use the same tech to drive iterative (good!) improvements to their products, like Siri and keyboard auto-complete.


Yeah. Siri supports text input already, anyway. Siri is their ChatGPT-style bot that's going to keep improving.


But does it even work sensibly, yet? Almost every time my partner asks Siri something, it works so badly that we end up asking Android/Google's Assistant, which responds well to most things.


What sort of things don’t work well? Phone actions or knowledge / info type questions?


I would challenge the keyboard autocomplete. I find the Apple suggestions to be frustratingly poor vs my experience on Android.


I thought it couldn't get any worse and then I upgraded to iOS 17. It's awful.


Out of curiosity, have you experienced their autocorrect on iOS 17? That’s when they updated it to be LLM-based.


I don't recall exactly when it started happening, but I've been having lots of issues with recent iOS versions rewriting not the last word entered, but the word before that. For example, if I start entering "I went to", it'll sometimes correct to "I want to", but it'll do that after I've typed the "to". I've found lots of similar examples. The retrospective nature of the edits means I miss a lot of them and makes me appear a lot less literate than I am.


The same happens to me quite often on mobile, even here. But I use an iPhone SE 1st Gen with iOS 15.8.


Transformer-based autocomplete on iOS 17 feels just as bad -- but in different ways -- as its previous incarnation to me.


Are you tapping the keys or swiping over those that make up the word you want to type? In my experience, tapping has always been and remained poor but swiping is getting better and better with every iOS version.


Swiping through keys doesn't have anything to do with autocomplete. Autocomplete has to do with predicting which word you're going to type next, not guessing which word best corresponds to the swipe you just made.


Those are very related tasks, you use results of the former to help you with the latter.


> Apple has been looking sleepy on LLMs, but they've been consistently evolving their hardware+software AI stack, without much glitzy advertising

They don't sell compute time to other companies to run AI, or massive custom hardware for AI training.

They aren't after VC funding.

Their core business isn't threatened by AI being "the evolution of search."

Product-wise, so far all you hear is messaging around things like pointing out the applicability of the M3 Max for running ML models.

Until they have real consumer products ready, they only need to keep tabs on analysts, with lip service at financial meetings.


Given Apple's track record on anything AI-related, and the terrible state they keep CoreML in, that not only seems extraordinarily unlikely; it would take a lot of time to win developer trust, and I just don't see that happening.


Apple doesn’t have to win developer trust or build an AI platform. They just have to build a compelling consumer product that can only function with AI, and they are better equipped to do that than Google or Microsoft. It remains to be seen if OpenAI will go that route instead of a business built on training and providing access to foundational models.


Yes, this is the most important point and I think somehow least present in even discussions here: the technical question of who produces the best/cheapest LLM/future architecture is considerably less important than who, if anyone, creates a fundamentally new and dominant consumer experience built on AI. Most of the existing players (Google, Meta) would of course prefer that nobody produces such a newly dominant paradigm of computation for end-users since it would greatly reduce their relevance and subsequently revenues. Right now, ChatGPT is the only real contender in this space. However, I think you’re correct that it’s actually Apple who is most likely to be the next who attempts such a paradigm shift. Far too early to bet, but let’s say I wouldn’t be surprised if in five years we end up in a world in which Apple has the consumer monopoly and Microsoft the business monopoly, with Google and Meta falling into irrelevance.


I think Microsoft is going to eat OpenAI; the company is practically half in and half out of Microsoft's mouth. Bing will likely add more and more features that are native to ChatGPT. Google, I think, will eventually get in the game. Facebook is actually doing better than Google, especially on open-source models, which is buoying the smaller researchers and developers.

In the end, one company will build AGI or super-AGI that can do the function of any existing software, even games, with any interface, even VR, or no interface at all - just return deliverables like a tax return. The evolution might be: give me an easier but similar QuickBooks UI for accounting to just do my taxes. The company that gets there first could essentially put all other software companies out of business, especially SaaS businesses.

The first company to get there will basically be a corporate singularity and no other company will be able to catch up to them.


>They just have to build a compelling consumer product that can only function with AI

Yeah, and I'm not talking exclusively about developer trust. Given Apple's current consumer lineup (see Siri, Apple Photos, predictive text, etc.), we only have evidence that they suck at ML. What makes you think they are going to suddenly transform overnight?


> see Siri, Apple photos, predictive text etc)... we only have evidence that they suck at ML

…or that they only deal with mature tech and not the shiny new thing. Makes sense to me. I don’t doubt everyone will have a personal LLM-based assistant in their phones soon, but with the current rate of improvements to LLMs and AI in general, I’d wait for at least a year more while doing R&D in-house if I were Apple.


You could use that apologist language for any company. If they suck at something, just say they are "biding their time." No. Apple is just demonstrably behind.

Having terrible predictive text, voice-to-text, image classification, etc. isn’t just a quirk of the way they do business. Those are problems with years of established work put into them, and they just flat out aren’t keeping up.


“Apologists” … this is the domain of strategic analysis, business and products. Apple, Google, et al are not feudal lords or entities owed personal allegiance, nor sporting teams for fans to rally around, nor are we talking about morality and ethics where Apple did something wrong and apologists are justifying it.

As far as whether they are keeping up or not, I disagree, but neither of our opinions really matter unless we’re betting — that is, taking actions based on calculated risks we perceive.


You disagree based on what? On virtually every measure they are behind in AI. I can’t think of anywhere they are ahead; please enlighten me.


… because Siri, predictive text, etc. suck because they aren’t using an LLM. Alexa and the Google Assistant from the same era all suck as well. I don’t see how evidence that Apple sucked with pre-LLM ML is an indicator that they will suck at integrating an LLM into their products.

No one said anything about transforming overnight.


I have enjoyed working with CoreML over the last few years. Please share what you didn’t like about it.


There are so many modern ML components that have terrible or no support in CoreML. Try to do any convolution other than conv2d, or use advanced or custom activation functions, and you are out of luck. Exporting from PyTorch leads to all sorts of headaches, with subtle behavior changes between implementations. It is definitely a pain point for developers of widely used software.


+1 thanks


Maybe MLX is meant to fill this gap?

https://github.com/ml-explore/mlx


Can you give an example? I switched to Android because I use a personal assistant a lot while driving, and Siri was absolutely horrible.


- FaceID

- Facial recognition in Photos

- "Memories" in Photos

- iOS keyboard autocomplete using LLMs. I am bilingual, and noticed that the latest iOS now does multi-language autocomplete; you no longer have to manually switch languages.

- Event detection for Calendar

- Depth Fusion in the iOS camera app, using ML to take crisper photos

- Probably others...

The crazy thing is most/all of these run on the device.


The iPhone's built in text OCR and image subject cutouts are also extremely good, just in the photos app.


Yeah totally, I copy text from images all the time.


The combination of automatic OCR and translation almost everywhere in the OS is great.


I just wish you could turn the multilingual keyboard off. I find that I usually only type in one language at a time, and having autocomplete recommend the wrong language is quite frustrating.


That's true; I have found that mildly annoying sometimes. But most of the time it's a win. It was really annoying manually switching modes over and over when typing mixed-language, which I do fairly often. It'd be great if there were a setting, though.


I had the opposite problem: the languages I usually typed in (Romanian + English) didn't have a multi-language mode on iOS. So it was a constant pain to switch between them when I needed to insert some English terms into Romanian sentences. iOS didn't support multi-language for this language pair. On Android it always worked like a charm.


Hey, I'm Romanian too. The latest iOS does what you want -- it has multi-language support, and typing mixed English + Romanian is seamless now. Yeah, it was a total pain to keep switching languages before iOS 17.


To be honest, I distrust Microsoft with SwiftKey, but it recognizes the change in language just fine. I can switch languages within one sentence and it understands what I am writing, with no silly suggestions.


Apple recommending wrong words when you write mixed-language was the case in iOS 15, so much so that I always needed to manually change my keyboard language. But that's no longer the case in iOS 17. As an example, I just typed this entire comment on a Turkish keyboard with English autocorrect and suggestions working.

Maybe the (most likely) AI-based thing requires some training though. I got my new iPhone a month or so ago.


SwiftKey is great, and if someone distrusts Microsoft for it... then fine, but various companies "control" different parts of my phone.

With an iPhone, it's only one company, that controls every little thing, and we have no insight into Apple at all. They can basically do whatever the hell they want.


Yeah, Face ID is pretty good; no Android phones seem to use the IR dot camera, which makes me think Apple has a "patent" on it... lame, considering the dot projector is ripped straight out of the Kinect.

Google does the memory photo thing too, but only if you use their app, which I don't.

Android has had multi-language keyboard support for the longest time. In addition to being able to install whatever keyboard I like (I use SwiftKey; it's brilliant), I can also install an LLM-based one as I please.

Android/my keyboard already does event detection/suggestion in text, and has for as long as I can remember.

None of these are reasons to buy an iPhone...just reasons to buy a phone, lmao.


FYI, Apple bought the company that made the Kinect sensors. The Face ID module on iPhones is a mini Kinect.


Are you so sure? Even this link is built on top of the work of others, I'm not sure they've contributed as much as you think they have.


I wouldn’t go too far. They didn’t even train this model on Apple hardware; it was trained on Nvidia A100s.


Don’t TSMC make Nvidia’s chips too?


Yup! TSMC wins either way.


Personal ML systems running on hardware you own is the killer app. If these are "good enough" they'll be significantly preferable to using large subscription-based models, where those companies could pull a Lucy any day.


generic first-order shallow argument


You're suggesting that Apple could fit what can't be done with a 4090 into a laptop?

Color me doubtful.


But Apple will just make a magical chip that's different to regular hardware cause they're the best company and invent all the things even if they've been seen before Apple still invented it first, just wait until their Super Unicorn Ultra™ chip comes out with Hyperdrive Retinated LLM™ support, they don't name normal hardware different just for marketing...it's really unique, new and inventive hardware that we're happy to pay a huge premium for because it's so advanced and inventive.


Looks like clicking a reference adds the hash to the URL but doesn't scroll to the reference. If you load the hash URL directly in the browser you get a 404 page...



Yeah, it seems like a bug in the HTML generator...


It is a bug. Will be fixed soon.

