seanwilson's comments | Hacker News

> You are consuming it in a condensed secondary form (one trophic level up).

I always find this gets overlooked and is a double standard. You can raise an animal on a diet of anything along with medication, drugs, and supplements, and advocates will label the beef/chicken/pork product as "meat" and "natural" as if it were a single pure ingredient. But then if a non-meat alternative like a burger is mentioned, every individual ingredient used gets scrutinized, even if that ingredient is often fed to farm animals, like soy or grain.


Any interesting work on using LLMs to moderate posts/users? HN is often said to be different because of its moderation, so couldn't you train an LLM moderator on similar rules to reduce trolls, ragebait, and low-effort posts at scale?

A big problem I see is that users acting in good faith are unable to hold back from replying to bad-faith posts, a failure to follow the old "don't feed the trolls" rule.


I won't be surprised if LLMs get good at puzzle-heavy text adventures once more attention is turned to this.

I've found that for text adventures based on item manipulation, variations of the same puzzles appear again and again because there's a limit to how many obscure-but-not-too-obscure item puzzles you can come up with, so training would work well for exact matches of the same puzzle and for variations, like different ways of opening locked doors.

Puzzles like key + door, crowbar + panel, dog + food, coin + vending machine, vampire + garlic etc. You can obscure or layer puzzles, like changing the garlic into garlic bread, which would still work on the vampire, so there are logical connections to make but often nothing too crazy.

A lot of the difficulty in these games comes from not noticing or forgetting about clues/hints and potential puzzles because there's so much going on, which is less likely to trip up a computer.

You can already ask LLMs "in a game: 20 ways to open a door if I don't have the key", "how to get past an angry guard dog" or "I'm carrying X, Y, and Z, how do I open a door", and it'll list lots of ways that are seen in games, so it's going to be good at matching that with the current list of objects you're carrying, items in the world, and so on.

Another comment mentions how the AI needs a world model that's transforming as actions are performed, but you need something similar to reason about maths proofs and code, where you have to keep track of the current state/context. And most adventure games don't require you to plan many steps in advance anyway. They're often about figuring out which item to combine/use with which other item next (where only one combination works), and navigating to the room that contains the latter item first.

So it feels like most of the parts are already there to me, and it's more about getting the right prompts and presenting the world in the right format e.g. maintaining a table of items, clues, and open puzzles, to look for connections and matches, and maintaining a map.
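For example, a rough sketch of the kind of world state you could keep in the prompt each turn (names made up for illustration) so the model can match carried items against open puzzles:

    // Hypothetical game state handed back to the LLM every turn
    interface WorldState {
      location: string;
      inventory: string[];
      itemsSeen: Record<string, string>;   // item -> room where it was last seen
      openPuzzles: string[];               // obstacles with no known solution yet
      map: Record<string, string[]>;       // room -> exits
    }

    const state: WorldState = {
      location: "cellar",
      inventory: ["garlic bread", "rusty key"],
      itemsSeen: { "crowbar": "tool shed", "vending machine": "lobby" },
      openPuzzles: ["vampire blocks the crypt door", "panel screwed shut"],
      map: { "cellar": ["kitchen"], "kitchen": ["cellar", "lobby"] },
    };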

Getting LLMs to get good at variations of The Witness would be interesting, where the rules have to be learned through trial and error, and combined.


Doesn't it kind of defeat the purpose, though?

If you have to train the AIs on every specialized new problem, and then you have to babysit them as you apply them to similar problems, why even bother?

It's not really intelligent in any real sense.


Automation can be useful and valuable (economically) even if not intelligent. Heck, from a big-picture view of solving a problem (say, manufacturing something), a solution/process/workflow etc. that requires less intelligence may be the preferable one - if such a solution can be found, that is. It can be expected to be cheaper, more robust, and repeatable.

I'm iterating on my accessible color palette designer:

https://www.inclusivecolors.com/

I'm working on an article/tutorial about how to develop an intuition for the WCAG contrast rules, so you can spot bad contrast yourself instead of being surprised (and irritated) when contrast checker tools flag failures later, and how to solve color contrast problems, e.g. what design options you have when two colors don't contrast.

After speaking to other designers, I find most want to follow the WCAG rules but get stuck going in circles not knowing how to fix color contrast issues e.g. when changing one color throws another one off, and how to work with brand/primary colors that lack contrast (like orange/yellow on white).

Since working on my palette designer I've developed a lot of intuition in this area, so I wanted to share what I've learned. There are lots of articles on the WCAG rules, but many will say "pick accessible colors" without really helping you understand how, especially when you have to stick to certain brand colors.
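As a rough sketch of the underlying maths (this is just the standard WCAG 2.x formula, nothing specific to my tool), contrast is a ratio of relative luminances, so lightness is the main lever for fixing failures rather than hue:

    // WCAG 2.x relative luminance from 8-bit sRGB channels
    function luminance(r: number, g: number, b: number): number {
      const lin = (c: number) => {
        const s = c / 255;
        return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
      };
      return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
    }

    // Contrast ratio runs from 1:1 (identical colors) to 21:1 (black on white)
    function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
      const [light, dark] = [luminance(...fg), luminance(...bg)].sort((a, b) => b - a);
      return (light + 0.05) / (dark + 0.05);
    }

    // Orange on white is roughly 2:1, well below the 4.5:1 WCAG AA requires for normal text
    console.log(contrastRatio([255, 165, 0], [255, 255, 255]).toFixed(2));

Which is also why a failing brand orange usually can't be fixed by nudging the hue: you have to darken it or pair it with a darker background.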


> What animal is featured on a flag of a country where the first small British colony was established in the same year that Sweden's King Gustav IV Adolf declared war on France? ... My point is that if all knowledge were stored in a structured way with rich semantic linking, then very primitive natural language processing algorithms could parse question like the example at the beginning of the article, and could find the answer using orders of magnitude fewer computational resources.

So as well as people writing posts in English, they would need to provide semantic markup for all the information like dates, flags, animals, people, and countries? It's difficult enough to get people to use basic HTML tags and accessible markup properly, so what was the plan for how this would scale, specifically to non-techy people creating content?


So Wikipedia and Wikidata?

This actually happened already, and it's part of why LLMs are so smart. I haven't tested this, but I'd venture a guess that without Wikipedia, Wikidata, Wikipedia clones, and stolen articles, LLMs would be quite a lot dumber. You can only get so far with Reddit articles and embedded knowledge of basic info in higher-order articles.

My guess is that when fine-tuning and modifying weights, the lowest-hanging fruit is to overweight Wikipedia sources and reduce the weight of sources like Reddit.


Only a relatively small part of Wikipedia has semantic markup though? Like if the article says "_Bob_ was born in _France_ in 1950", where the underlines are Wikipedia links, you'll get some semantic info from the use of links (Bob is a person, France is a country), but you'd be missing the "born" relationship and the "1950" date as these are still only raw text.
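To make that concrete, the kind of explicit markup the semantic web envisaged for that one (made-up) sentence would be something like this schema.org JSON-LD:

    {
      "@context": "https://schema.org",
      "@type": "Person",
      "name": "Bob",
      "birthDate": "1950",
      "birthPlace": {
        "@type": "Country",
        "name": "France"
      }
    }

Multiply that by every sentence in every article and it's clear why almost nobody marks up prose to this level by hand.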

Same with the rest of the articles, whose much more complex relationships would probably be daunting even for experts to mark up in an objective and unambiguous way.

I can see how the semantic web might work for products and services like ordering food and booking flights, but not for more complex information like the above, or how semantic markup is going to get added to books, research articles, news stories etc. that are always coming out.


The semantic information is first present not in markup but in natural language.

But it is also present inside the website: there are infoboxes that mark the type of object, place, person, or theory.

Additionally infoboxes also hold relationships, you might find when a person was born in an infobox, or where they studied.


> The semantic information is first present not in markup but in natural language.

Accurate natural language processing is a very hard problem though, and is best handled by AI/LLMs today, but this goes against what the article was going for when it says we shouldn't need AI if the semantic web had been done properly?

For example, https://en.wikipedia.org/wiki/Resource_Description_Framework and https://en.wikipedia.org/wiki/Web_Ontology_Language are some markup approaches related to the semantic web.

Complex NLP is the opposite of what the semantic web was advocating? Imagine asking the computer to buy a certain product and it orders the wrong thing because the natural language it parsed was ambiguous.

> Additionally infoboxes also hold relationships, you might find when a person was born in an infobox, or where they studied.

That's not a lot of semantic information compared to the contents of a Wikipedia article that's several pages long. Imagine a version of Wikipedia that only included the infoboxes and links within them.


Yeah. Wikidata

Slightly related: I die a little inside each time I see `JSON.parse(JSON.stringify(object))` thinking about the inefficiencies involved compared to how you'd do this in a more efficient language.

There's structuredClone (https://developer.mozilla.org/en-US/docs/Web/API/Window/stru... https://caniuse.com/?search=structuredClone) with baseline support (93% of users), but it doesn't work if fields contain DOM objects or functions, meaning you might have to iterate over and preprocess objects before cloning, so it's more error-prone, manual, and again inefficient?
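Rough sketch of the trade-off:

    const original = { when: new Date(), nested: { items: [1, 2, 3] }, greet: () => "hi" };

    // JSON round-trip: Dates become strings, functions/undefined are silently dropped,
    // and everything gets serialised to text and parsed again just to make a copy
    const jsonCopy = JSON.parse(JSON.stringify(original));

    // structuredClone keeps Dates, Maps, Sets, typed arrays etc., but throws a
    // DataCloneError on functions or DOM nodes, so those fields need stripping first
    const cloneCopy = structuredClone({ when: original.when, nested: original.nested });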


March 2022 is not that long ago for a codebase. It takes time, but JavaScript has come a long way and it's definitely going in the right direction.


It also depends on the codebase: if you use frameworks with deep reactivity like Vue, you can't use structuredClone without toRaw (which only works if the object is shallow), as it'd throw on proxy objects.
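Something like this is the Vue case I mean (a sketch, exact behaviour depends on how the state is nested):

    import { reactive, toRaw } from 'vue'

    const state = reactive({ user: { name: 'Ada' } })

    // structuredClone(state) throws because the reactive Proxy can't be cloned.
    // toRaw unwraps the outer proxy, but refs/reactive objects stored deeper in
    // the state can still be proxies, so deep state may need preprocessing anyway.
    const copy = structuredClone(toRaw(state))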

Svelte has `$state.snapshot()` for this reason I believe.


"The International Mathematical Olympiad (IMO) is the World Championship Mathematics Competition for High School students", so not to undermine it but it's below university or graduate level.

Research-level mathematics like this is as hard as it gets, and this proof is famously difficult: it uses many branches of advanced mathematics, required thousands of pages of proofs, and took years of work.


Yes but the hard work (coming up with a human-readable proof) has already been done.


Human-readable (informal) proofs are full of gaps that all have to be traced back to axioms, e.g. gaps that rely on shared intuition, background knowledge, and other informal proofs.

It's somewhat like taking rough pseudocode (the informal proof, a mixture of maths and English) and translating that into a bullet-proof production app (the formal proof, in Lean), where you're going to have to specify every step precisely, traced back to axioms, handle all the edge cases, fix incorrect assumptions, and fill in the missing parts that were assumed to be straightforward but might not be.

A major part is you also have to formalise all the proofs your informal proof relied on so everything is traced back to the initial axioms, e.g. you can't just cite Pythagoras' theorem, you have to formalise that too.
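As a toy illustration (nothing to do with FLT itself), even trivial steps in Lean have to be justified by a definition or a previously proved lemma rather than by intuition:

    -- Closed by definitional computation
    theorem two_add_two : 2 + 2 = 4 := rfl

    -- "Addition is commutative" has to reference its formal proof (Nat.add_comm)
    theorem swap (a b : Nat) : a + b = b + a := Nat.add_comm a b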

So it's an order of magnitude more difficult to write a formal proof compared to an informal one, and even when you have the informal proof it can take teams many years of effort.


I’m almost certain this is ignorance on my part, but it seems like this would mean the proof is… possibly wrong? I mean if there are gaps and other informal proofs in there?

But I thought it was a widely celebrated result.


Kevin Buzzard told me that the worry that it might in fact be wrong is a huge motivator for him. I also once asked Serge Lang why so much mathematics is correct (which surprised me, coming from programming where everything has bugs), and he said: “people do a large number of consistency checks beyond what is in the published proofs, which makes the chances the claimed results are correct much, much higher.” Another related quote, which Bryan Birch told me once: “it is always a good idea to prove true theorems.”


I suspect he is massively overestimating the reliability of obscure mathematics.


> Kevin Buzzard told me that the worry that it might in fact be wrong is a huge motivator for him.

And then, when I raised concerns in Zulip about Lean's metaprogramming facilities being used to trick the pipeline into accepting false proofs, he said the opposite. He even emphasized that the formalization efforts are not for checking proof correctness, but for cataloguing truths we believe in.

This kind of equivocation turned me away from that community, to be honest. That was an extremely frustrating experience.


For what it's worth, I don't think that Kevin Buzzard is the person you should talk to if you are interested in proof assistant design. As far as I know, Buzzard does not consider himself to be an expert in type theory or in proof assistants, and claims to be a mere user.


> “people do a large number of consistency checks beyond what is in the published proofs, which makes the chances the claimed results are correct much, much higher.”

I imagine one bias is that because formal verification is such a huge effort, you're only going to do it for really interesting and impactful proofs, which means the proofs that get formally verified will already have been reviewed and critiqued a lot, and will be less likely to have glaring critical errors.


> I’m almost certain this is ignorance on my part, but it seems like this would mean the proof is… possibly wrong?

That's part of the motivation to formalise it. When a proof gets really long and complex, relies on lots of other complex proofs, and there's barely any single expert who has enough knowledge to understand all the branches of maths it covers, there's more chance there's a mistake.

There are a few examples here of errors/gaps/flaws found while formalizing proofs that were then patched up:

https://mathoverflow.net/questions/291158/proofs-shown-to-be...

My understanding is it's common to find a few problems that then need to be patched or worked around. It's a little like wanting to know if a huge codebase is bug-free or not: you might find some bugs if you formalized the code, but you can probably fix the bugs during the process because it's generally correct. There can be cases where it's not fixable though.


I think it was Terence Tao on the Lex Fridman podcast recently who said that there are very often little mistakes in big proofs, but they are almost always able to be patched around. It's like mathematicians' intuition is tracking some underlying reality and the actual formalization is flexible. Yes, sometimes digging down into a small mistake leads to an unbridgeable gap and that route has to be abandoned, but uncannily often such issues have nearby solutions.


Also, a lot of errors would be called "typos", not errors, such as some edge cases missing from the theorem statement which technically make the theorem false. As long as there's a similar theorem in the same spirit that can be proven, that's what the original was all along.


The level of rigor used in math is sometimes characterized as "sufficient to convince other mathematicians of correctness." So, yeah, possibly, but not in a willy-nilly way. It's not a proof sketch, it's a proof. It's just written in human language designed for communication, not in a machine-checkable formal language.


Yes, and when it was first published it was wrong (it made a leap of logic).

It takes thorough review by advanced mathematicians to verify correctness.

This is not unlike a code review.

Most people vastly underestimate how complex and esoteric modern research mathematics is.


The thing is though that MANY bugs slip through even the most thorough code reviews. As a security researcher, I can tell you there is literally no system out there that doesn't have such a bug in it.

The systems we deal with in software are massive compared with your typical mathematical framework though. But FLT is probably on similar scope.


> It's somewhat like taking rough pseudo code and translating that into a bullet-proof production app

That's actually something LLMs are already quite good at.


/s


I have little knowledge in this area, but my understanding is it's like this:

There is a pseudocode app that depends on a bunch of pseudocode libraries. They want to translate that pseudocode app into a real runnable app. They can do that, and it's a good amount of work, but reasonable. The problem is that to get the app to run they also need to translate the hundreds or thousands of pseudocode libraries into actual libraries. Everything from OS APIs, networking libs, rendering libs, and language standard libs needs to be converted from specs and pseudocode to real code to actually run the app. And that's a ton of work.


No, some of the harder work has been done. Translating human-readable proofs into machine-readable ones is also very hard work and an area of active research.


This is a nice overview of what this is, why they're doing it and why it's many years of work:

https://github.com/ImperialCollegeLondon/FLT/blob/main/GENER...


Link added to top text. Thanks!


> The trap is that both OOP hierarchies and FP "make illegal states unrepresentable" create premature crystallization of domain understanding into rigid technical models. When domains evolve (and they always do), this coupling demands expensive refactoring.

At least when you refactor your types, the compiler is going to pinpoint every line of code where you now have missing pattern checks, unhandled nulls, not enough parameters, type mismatches etc.
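For example (TypeScript here, but the same applies to Rust, OCaml etc.), adding a variant to a type makes the compiler point at every switch that hasn't caught up yet:

    type PaymentState =
      | { kind: "pending" }
      | { kind: "paid"; receiptId: string }
      | { kind: "refunded"; reason: string };

    // If another variant gets added to PaymentState later, this switch fails to
    // compile until the new case is handled, thanks to the `never` check below.
    function describe(p: PaymentState): string {
      switch (p.kind) {
        case "pending": return "awaiting payment";
        case "paid": return `paid (receipt ${p.receiptId})`;
        case "refunded": return `refunded: ${p.reason}`;
        default: {
          const unreachable: never = p;
          return unreachable;
        }
      }
    }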

I find refactoring in languages like Python/JavaScript/PHP terrifying because of the lack of this and it makes me much less likely to refactor.

Even with a test suite (which you should have even when using types), it's not going to exhaustively catch problems the type system could catch (maybe you can trudge through several null errors your tests triggered, but there could be many more lurking), working backwards to figure out what caused each runtime test error is ad hoc and draining (like tracing back where a variable value came from and why it was unexpectedly null), and having to write and refactor extra tests to make up for the lack of types is a maintenance burden.

Also, most test suites I see do not contain type related tests like sending the wrong types to function parameters because it's so tedious and verbose to do this for every function and parameter, which is a massive test coverage hole. This is especially true for nested data structures that contain a mixture of types, arrays, and optional fields.

I feel like I'm never going to understand how some people are happy with a test suite and figuring out runtime errors over a magic tool that says "without even running any parameters through this function, this line can have an unhandled null error you should fix". How could you not want that and the peace of mind that comes with it?


> it's not going to exhaustively catch problems the type system could catch

Unless you are using a formal proof language, you're going to have that problem anyway. It's always humorous when you read comments like these and you find out they are using Rust or something similar with a half-assed type system.


I caveated that you should have a test suite anyway (i.e. because types aren't going to catch everything), and the above was supposed to be a caveat meaning "for the behaviours the type system you have available can catch".

Obviously mainstream statically typed languages can't formally verify all complex app behaviour. My frustration is more aimed at having time and energy wasted from runtime and test suite errors that can be easily caught with a basic type system with minimal effort e.g. null checks, function parameters are correct type.

Formal proof languages are a long way from being practical for regular apps, and require massive effort for diminishing returns, so we have to be practical to plug some of this gap with test cases and good enough type systems.


> e.g. null checks, function parameters are correct type.

Once you've tested the complex things that (almost) no language has a type system able to express, you also have tested null checks, function parameter types, etc. by virtue of you needing to visit those situations in order to test the complex logic. This isn't a real problem.

What you might be trying to suggest, though, is that half-assed type systems are easier to understand for average developers, so they are more likely to use them correctly and thus feel the benefit from that? It is true that in order to write good tests you need to share a formal proof-esque mindset, and thus they are nearly as burdensome to write as using a formal proof language. In practice, a lot of developers don't grasp that and end up writing tests that serve no purpose. That is a good point.


> Once you've tested the complex things that (almost) no language has a type system able to express, you also have tested null checks, function parameter types, etc. by virtue of you needing to visit those situations in order to test the complex logic. This isn't a real problem.

I just don't find this in practice. For example, I've worked in multiple large Python projects with lots of test cases, and nobody is making the effort to check what happens when you pass incorrect types, badly formed input, and null values in different permutations to each function because it's too much effort and tedious. Most tests are happy path tests, a few error handling tests if you're lucky, for a few example values that are going to miss a lot of edges.

And let's be honest, it's common for parts of the code to have no tests at all because the deadline was too tight or it's deemed not important.

If you have a type system that lets you capture properties like "this parameter should not be null", why would you not leverage this? It's so squarely in the sweet spot of minimal effort, high reward for me (e.g. it eliminates null errors and makes refactoring easier later) that I don't want to use languages that expect me to write test cases for this.
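A minimal example of the kind of thing I mean (TypeScript with strictNullChecks; the names are made up):

    function initials(fullName: string): string {
      return fullName.split(" ").map(part => part[0]).join("");
    }

    // localStorage.getItem is typed as returning string | null
    const displayName = localStorage.getItem("displayName");

    // initials(displayName);            // compile error: 'string | null' is not assignable to 'string'
    initials(displayName ?? "Anonymous"); // fine: the null case is handled explicitly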

> half-assed type systems are easier to understand for average developers

Not sure why you call them that. Language designers are always trying to find a sweet spot with their type systems, in terms of how hard they are to use and what payback you get. For example, once you try to capture even basic properties, e.g. about the size/length of collections, in the types, the burden on the dev gets unreasonably high very quickly (like requiring devs to write proofs). It's a choice to make them less powerful.


> Most tests are happy path tests, a few error handling tests if you're lucky, for a few example values that are going to miss a lot of edges. And let's be honest, it's common for parts of the code to have no tests at all...

This seems like a roundabout way of confirming that what you are actually saying is that half-assed type systems are much easier to grasp for average developers, and thus they find them to be beneficial because being able to grasp it means they are able to use it correctly. You are absolutely right that most tests that get written (if they get written!) in the real world are essentially useless. Good tests require a mindset much like formal proofs, which, like writing true formal proofs, is really hard. I did already agree that this was a good point.

> Not sure why you call them that.

Why not? It gets the idea across enough, while being sufficiently off brand that it gets those who aren't here for the right reasons panties in a knot. Look, you don't have to sell me on static typing, even where not complete. I understand the benefits and bask in those benefits in my own code. But they are also completely oversold by hyper-emotional people who can't discern between true technical merit and their arbitrary feelings. Using such a term reveals where one is coming from. Those interested in the technical merit couldn't care less about what you call it. If someone reacts to the term, you know they aren't here in good faith and everything they say can be ignored.


As someone who has actually written stuff in non-"half-assed type systems": it's really not about understanding. Even if you understand, it's a HUGE pain to write things in them. It can be worth it if you need extremely high assurance, but in general it's just not worth it for most software.

Dynamic typing is on the other end of the spectrum. That is a huge pain precisely because there are no automated checks.

In between those two extremes there is a (subjective) sweet spot, where you don't pay much at all in terms of overhead, but you get back a ton from the checks it provides.


> This seems like a roundabout way of confirming that what you are actually saying is that half-assed type systems are much easier to grasp for average developers

To clarify, I think formal verification languages are too advanced for almost everyone and overkill for almost every mainstream app. And type systems like we have in Rust, TypeScript and OCaml seem a reasonable effort/reward sweet spot for all levels of developer and most projects.

What's your ideal set up then? What type system complexity (or maybe language)? How extensive should the test suite be? What categories of errors should be left to the type system and which ones for the test suite?


> Once you've tested the complex things that (almost) no language has a type system able to express, you also have tested null checks, function parameter types, etc. by virtue of you needing to visit those situations in order to test the complex logic.

That's not true. At no point in testing `fn add(a: i32, b: i32) -> i32` am I going to call `add("a", "b")` or `add(2, None)`. Rust won't even permit me to try. In a language with a more permissive type system, I would have to add additional tests to check cases where parameters are null or of the wrong type.


> At no point in testing `fn add(a: i32, b: i32) -> i32` am I going to call `add("a", "b")` or `add(2, None)`.

It seems you either don't understand the topic of discussion or don't understand testing (see previous comment). If the user of your function calls it in undocumented ways, that's their problem, not yours. That is for their tests to reason with.

Passing the wrong types is only your problem for the functions you call. Continuing with your example, consider that you accidentally wrote (where the compiler doesn't apply type checking):

    fn double(a: i32) -> i32 {
        add(a, None) // Should have been add(a, a)
    }
How do you think you are going to miss that in your tests, exactly?


What your tests will miss is the invalid data that was constructed in one corner of the codebase

    // apply special discount
    newPaymentInfo = {value: oldPaymentInfo.value / 2}
    newPaymentInfo.tax = applyRegionalTax(newPaymentInfo.value)
    return newPaymentInfo
which only gets parsed by another corner of the codebase at runtime.

    // apply tax to tips only in some regions
    if (taxableTips) {
        paymentInfo.tip += applyRegionalTax(paymentInfo.value)
        // BUG: tip is undefined (instead of zero), so += gives NaN
    }
You can't validate everything all the time, and if you try, it's easy for that validation to fall out of sync with the actual demands of the underlying logic. Errors like this crop up easily while refactoring. That's why one of the touted benefits of Rust's type system is "fearless refactoring."


Real functions are tens of lines long or more, have complex inputs, multiple branches, and call other complex functions, so tests that try a few inputs and only check for a few behaviours aren't going to catch everything.

If it's practical to get a static type system to exhaustively check a property for you (like null checks), it's reckless in my opinion to rely on a test suite for that.

> If the user of your function calls it in undocumented ways, that's their problem, not yours.

Sounds reckless to me as well, because you should assume functions have bugs and will also be passed bad inputs. If a bug makes a function return a bad output, and that gets passed to another function in a way that gives "undocumented" behaviour, I'd much prefer the code to fail or not compile at all, because when this gets missed in tests it'll eventually trigger in production.

I view it like the Swiss cheese model (mentioned elsewhere), where you try to catch bugs at a type checking layer, a test suite layer, code review, manual QA, runtime monitoring etc. and you should assume flaws at all layers. I see no good reason to skip the type checking layer.


If you need to test for null checks and function parameter types, then your dismissal of "half-assed" type systems is severely misplaced. Everyone [1] agrees that testing null checks is a huge waste of time.

[1] https://jspecify.dev/about/


There’s a difference between “there’s some gaps” and “you can drive a bus through it.”

Lots of languages other than Rust have static types, some more complete than others.


Trouble is that if there are gaps then the types become redundant. Consider error handling. Rust helps ensure you handle the error, but it doesn't ensure you handle the error correctly. For that, you must write tests. But once you've written tests to prove that you've handled the error correctly, you've also proven that you handled the error, so you didn't really need the type to begin with. You're really no better off than someone using PHP or JavaScript.


> But once you've written tests to prove that you've handled the error correctly, you've also proven that you handled the error

Hardly. Suppose all caught errors in a particular module of code bubble up to a call site which (say) retries with exponential back-off. If the compiler can guarantee that I handle every error, I only need one test that checks whether the exponential back-off logic works. With no error handling guarantee, I'd need to test that every error case is correctly caught—otherwise my output might be corrupted.


The chief reason software fails is because programmers are insufficiently aware of all of the reasons the software can fail. Sure you need a test to make sure that you handle the error correctly, but if the function signature doesn't indicate the possibility of an error occurring, why would you write that test in the first place?


By the same reasoning, if the function signature doesn't indicate what specific errors can occur, why would you write a test in the first place?

No matter how you slice it you have to figure out what the software you are calling upon does and how it is intended to function. Which is, too, why you are writing tests: So that your users have documentation to learn that information from. That is what tests are for. That is what testing is all about! That it is also executable is merely to prove that what is documented is true.


> if the function signature doesn't indicate what specific errors can occur, why would you write a test in the first place?

All languages with exceptions enter the chat


All languages with exceptions...?

Let us introduce you to the concept of checked exceptions. That is one of the few paradigms we've seen in actually-used languages (namely Java) where communicating which specific errors will occur has been tried.

Why is it that developer brains shut off as soon as they see the word "error"? It happens every time without fail.


I'm aware of checked exceptions in Java. What I'm not aware of is a language which has checked exceptions as the only exception mechanism, which would be the only way to have exceptions always reflected in the function definition.


Types require opt-out and automatically extrapolate onto new code. Tests require opt-in.


You're right, but kind of missing the way risk works. Normally you want something like a swiss cheese model[0] where different layers reduce the likelihood of issues.

Snubbing type systems because they aren't 100% failproof misses that point.

[0] https://en.m.wikipedia.org/wiki/Swiss_cheese_model


SEEKING WORK | UX/UI & web design

Portfolio: https://seanw.org/

Live example projects: https://checkbot.io/ https://inclusivecolors.com/

Location: Edinburgh, UK and remote (I’m used to time zone differences and async work)

---

I help startups with the UX/UI and web design of their products. This includes web apps, websites, landing pages, copywriting, and I can assist with frontend development where needed. My background of launching my own products and being a full stack developer helps me create practical designs that balance usability, aesthetics, development effort, and performance. I work to fixed price quotes for self-contained projects.

---

The best live example of my work is Checkbot (https://checkbot.io/), a browser extension that tests websites for SEO/speed/security problems. The entire project is my own work including coding the extension itself, UX/UI design, website design (the homepage is optimised to load in 0.7 seconds, 0.3MB data transferred), marketing, website copy, and website articles on web best practices.

[ Rated 4.9/5, 80K+ active users, 100s of paying subscribers ]

---

I have 10+ years of experience, including a PhD in software verification and 5+ years working for myself helping over 25 companies including Just Eat, Triumph Motorcycles and Fogbender (YC W22). See my website for testimonials, portfolio and more: https://seanw.org

Skills: Figma, Sketch, TypeScript, JavaScript, Vue, Hugo, Jekyll, WordPress, Django, HTML/CSS, Bootstrap, Tailwind, OCaml, Java, Python, C, analytics, WCAG accessibility, website SEO/speed optimisation.

Note: For large projects, my partner usually assists me in the background (I’m working on starting a design studio with her in the future)

---

Email sw@seanw.org with a short description of 1) your project 2) how you think I can help 3) the business outcome you’re looking for and 4) any deadlines. I can get back to you in one working day to arrange a call to discuss a quote and how we can work together!

