If you can't prove a theorem, then it's not a theorem, it's a conjecture. Conjectures are not as useful. For example, the Shannon-Hartley theorem tells us under what circumstances it is possible to decode a radio signal. That theorem lets us design, e.g., cell phone networks.
Theorems are useful in all sorts of unexpected ways. At the turn of the 20th century group theory was considered a worthless corner of mathematics, but understanding group theory allows modern chemical instrumentation to work.
edit:
> It seems like it's the full employment act for pencil pushers
> downvoters downvote instead of providing refutation. A lot in common with fundamental religionists
You've proven the downvoters right by piling one ad hominem attack upon another.
The example you've described (radio signals) is just a method of winning an argument. It doesn't actually help you; in practice you do it by writing a computer program that tries out different parameters until you get the right ones.
Let's say you need a communications channel with a 300 Mbit/s capacity. Shannon's theorem lets you know what kind of parameters make that possible -- namely, bandwidth and signal-to-noise ratio. From there, I can make an informed decision about the entire signal chain. Once I choose the bandwidth, the theorem tells me the SNR so I can assign a noise budget to each of the components. If we assume that the noise is Gaussian, we can calculate total noise level as the root sum of squares of the noise levels of the individual components (don't forget to multiply by the gain). I can look at an amplifier IC's spec sheet and immediately say, "that's too noisy" or "that's overdesigned, I think I can get something cheaper".
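To make that concrete, here is a rough back-of-the-envelope sketch; the 40 MHz bandwidth is just an assumed figure for illustration:

```python
# Rough sketch (illustrative numbers only): Shannon-Hartley, C = B*log2(1 + SNR),
# rearranged to give the SNR needed for a target capacity at a chosen bandwidth.
import math

def required_snr_db(capacity_bps, bandwidth_hz):
    """Minimum SNR in dB needed to support capacity_bps over bandwidth_hz."""
    snr_linear = 2 ** (capacity_bps / bandwidth_hz) - 1
    return 10 * math.log10(snr_linear)

# e.g. 300 Mbit/s over an assumed 40 MHz channel
print(required_snr_db(300e6, 40e6))  # about 22.6 dB
```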
And thanks to theorems from the fields of real analysis and statistics, we know that the sum of independent Gaussians is itself Gaussian. So rather than sticking two components together and measuring the noise, or running a computer simulation, I can simply square, sum, and square root.
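Again, just an illustrative sketch, with made-up noise and gain figures:

```python
# Rough sketch of the noise-budget arithmetic: independent noise sources add
# in quadrature (root sum of squares), each referred through the gain that
# follows it. All figures below are invented for illustration.
import math

# (component noise in uV RMS, total gain of the stages after it)
stages = [
    (2.0, 100.0),   # input amplifier
    (10.0, 10.0),   # filter stage
    (50.0, 1.0),    # output driver / ADC buffer
]

total = math.sqrt(sum((noise * gain) ** 2 for noise, gain in stages))
print(f"output-referred noise: {total:.0f} uV RMS")  # ~229 uV RMS
```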
Each of these theorems reduces the amount of work necessary -- whether by pen and paper or by computer -- by an enormous factor. But if you can describe in similar detail how the computer program would work, I'll acknowledge your greatness.
That's just a few parameters that you avoided having to tune, a few milliseconds of computer time. I'm not great, just questioning basic assumptions that have been handed down from a time before computers were around.
You keep on asserting that the task -- without domain knowledge of Shannon's theorem -- is solvable by computer. Can you describe how such a computer program would work? I'm unconvinced that the computer program would get a reasonable result before you run out of money paying for it.
The Shannon theorem is basically voided by compressive sensing. But all of this (shannon+compressive) was a waste of practical people's time anyway. It's justification for pencil pushers who don't want to do real work.
How about solving differential equations numerically. Such programs are used all over the place in mathematical modeling, with applications ranging from economics to electronics.
And you can't just "write a computer program that tries different parameters". You have to prove that your numerical method solves a certain class of equations first, otherwise your rocket will fly sideways, if at all.
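As a toy illustration of why that proof matters: forward Euler applied to y' = -50y is only stable when the step size satisfies h < 2/50, and no amount of parameter twiddling tells you that such a bound holds for every equation of this kind.

```python
# Toy example: forward Euler on y' = -50*y. Stability theory says the step
# size must satisfy h < 2/50 = 0.04; outside that bound the numerical solution
# diverges even though the true solution decays to zero.
def euler(h, steps, lam=-50.0, y0=1.0):
    y = y0
    for _ in range(steps):
        y = y + h * lam * y
    return y

print(euler(h=0.01, steps=100))  # ~8e-31: decays, like the true solution
print(euler(h=0.05, steps=100))  # ~4e17: blows up -- the rocket flies sideways
```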
The intelligent way to do that nowadays is to use automatic differentiation followed by optimization "plugins". And that has little to do with proving theorems unless you consider all computer programs as proofs and all computer programmers as mathematicians.
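(For concreteness, a minimal sketch of what "automatic differentiation followed by optimization" might look like, using hand-rolled dual numbers and plain gradient descent; purely illustrative.)

```python
# Minimal sketch of forward-mode automatic differentiation via dual numbers,
# driving plain gradient descent. Real systems would use an AD library.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.der + o.der)
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)

def grad(f, x):
    return f(Dual(x, 1.0)).der  # derivative of f at x, computed automatically

f = lambda x: (x - 3.0) * (x - 3.0)  # toy objective with minimum at x = 3
x = 0.0
for _ in range(100):
    x -= 0.1 * grad(f, x)            # gradient descent step
print(x)                              # close to 3.0
```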
I'm not sure what automatic differentiation has to do with solving diff. equations but...
I assume that all those automatic methods you mention are passed down from above in some sort of holy scriptures that we're supposed to blindly believe and use? Or maybe some "pencil pushing" mathematician came up with them first and _proved_ that they actually work?
If you're solving DE's by hand you were probably just in a class taught by members of the Mathematician-Teaching Complex (allusion to Military-Industrial Complex).
I'm curious - you seem to be saying that theorems are useless, and instead, all we ever need to do is write programs to twiddle the numbers and work things out that way.
So what would you need me to do to show that proving theorems is important? What would convince you? What is your criterion for a successful response?
What is the criterion you would give an astrologer? I'm not equating them with astrology, but simply pointing out the countless billions wasted on mathematics that narrows to the rarefied "breed" of pure mathematician earning a salary is as much of a waste as if billions were spent on buggy whip boys. Some things are just meant to fade into museums and history books.
There is no argument that fundamental research (including, but not limited to, pure mathematics) is a high-risk investment with a small chance of a large pay-off. But, like most good high-risk investments, the potential pay-offs can be massive and, importantly, not always well understood in advance.
Would we have GPS without relativity, which in turn depends on non-Euclidean geometry, initially considered a curiosity? What about public key encryption, built upon number theory, proudly described as "useless" by the pure mathematician G.H. Hardy? There are so many examples like these.
Be careful of committing the fallacy of assuming that, because a clear line of ideas can be traced back from the present day, finding that line was as easy as recounting it.
Out of interest, what is your position on spending on sport, the arts, and other cultural endeavours?
People are tired of me criticizing maths on here so I'll stop with it.
My opinion on sports, arts etc. is the same. Publicly funded stuff is basically for the elite anyway; it's their way of cleverly siphoning tax-payer money (for their boring, outdated interests which they partake in to show the illusion of sophistication). The masses pay to see their interests (and are usually heavily taxed for it to boot).
I've had discussions with earnest creationists who point at the fossil record and say "Look - there's no way you can get from A to Z. There's a huge gap." When something is then found that's an intermediate form, they then say: "See! I told you! Now you have two gaps, and it's even worse!!"
So my point is this. I've read what you say, and I can see that you earnestly believe that pure mathematics and the theorems it produces are of no use. I'm asking what it would take to change your mind. What evidence would you need to convince you?
Science is grounded on falsifiable conjectures, and makes progress by testing those conjectures. If you are unwilling or unable to tell us what would be sufficient evidence, then I am unwilling to start chasing ghosts. It seems to me that trying to convince you would be like trying to nail fog to a wall.
Engineering is a critical discipline, and without it, things wouldn't get made. However, engineers rely on techniques that are known to work, and often that knowledge is based on deep theoretical work. Error-correcting codes, via which we get images back from Mars, and which allow reliable communications over cost-effective links are based on theoretical work. Yes, people played about and found the principles, but then they leveraged work done a century earlier to get to the limits. Having done so, they explored the theorems to see what axioms they were based on, and worked to see if those axioms could be circumvented.
It's the interplay between pure theory and pure experimentation that gives us enormous benefits, but it appears that you are willing, even eager, to dismiss fields of which you have no knowledge, purely because you can't see how they can possibly be useful.
It's Blub[0][1], all over again, but here we're not on a linear continuum, we're in a richly interconnected web of dependencies.
Another example. Fourier Transforms were first explored as a purely theoretical construct, showing that functions (with appropriate properties) form a vector space, and that vector space therefore has a basis, and that we should therefore be able to decompose functions into a representation relative to some basis. For decades this was a novelty, and then people started to use it for real. The theorems show the limitations, and then the engineers explore what can be physically achieved within those limits. Wavelets are now often used in contexts where the usual Fourier basis of trigonometric waves proves to be less useful, but the underlying theory is identical, and is still applied. And we know it will work, because of the theorems that were proven decades ago. The vector spaces are, by the way, infinite-dimensional, and the work to understand the infinities was done as a part of pure math with no obvious applications.
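A small concrete sketch of that decomposition, with an arbitrary made-up test signal:

```python
# Illustrative only: a sampled signal treated as a vector and decomposed
# against the Fourier (trigonometric) basis. The 5 Hz and 40 Hz components
# are arbitrary choices.
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)           # 1 second at 1 kHz
signal = 2.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

coeffs = np.fft.rfft(signal)                               # coordinates in the basis
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])

# The two basis directions carrying the most energy are the 5 Hz and 40 Hz ones.
top_two = np.sort(freqs[np.argsort(np.abs(coeffs))[-2:]])
print(top_two)  # [ 5. 40.]
```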
Another example. We know that some problems are equivalent to others, and we know that the current best algorithms for these problems are exponential. We therefore know, for sure, and not just because of experiments, that some instances of some search spaces will be infeasible without a major breakthrough. As a specific example of that, multiple times people have claimed efficient algorithms for problems known to be NP-complete. In some cases I've been able to prove that their algorithms, while possibly useful in general, are definitely not polynomial. I can do that because of the theorems I have to hand.
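To give a feel for what "exponential" means in practice, here is a toy brute-force subset-sum search (illustrative only); each extra item doubles the work, no matter how fast the computer is.

```python
# Toy illustration of an exponential search space: brute-force subset-sum must
# examine all 2^n subsets in the worst case.
from itertools import combinations

def subset_sum_bruteforce(items, target):
    checked = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            checked += 1
            if sum(combo) == target:
                return combo, checked
    return None, checked

items = list(range(1, 21))
# The target is unreachable, so every one of the 2^20 = 1,048,576 subsets is checked.
print(subset_sum_bruteforce(items, sum(items) + 1))
```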
But you won't care, and I can't make you care. I'm not writing this for you, because you appear not to be willing to change your mind, or consider that a field of which you appear to know very little might just be useful.
No, I'm writing this for people who read your comments and wonder. I'm writing this for people who are willing to entertain the idea that things of which they know little or nothing might be useful in ways they can't yet imagine.
You've given examples of old theorems, and also said they weren't actually useful at first. Later practical work led to someone seeing use in them (but only in a vague way - limits - which could have been, and probably would have been, found through experiment as well).
The example you gave of your proof doesn't say all that much, you won an argument (you said those algorithms were practically useful anyway).
I'm sincerely sorry if I come across as hostile. I'd just like to see mathematicians own up and change the corruption from the inside. A lot can come from it, you can change the world (you've taken up a lot it's resources, including some of the smartest people).
You are moving the goalposts again and again, which is why I asked you for the criteria necessary to change your viewpoint. It's clear that you won't, and this will be an endless and fruitless exchange. However, Duty Calls[0].
> You've given examples of old theorems, and also said
> they weren't actually useful at first.
We don't know what of today's work will become useful, or how. I can only give you hindsight evidence, because prediction is hard, especially of the future[1].
> Later practical work led to someone seeing use in them
> (but only in a vague way - limits - which could have been,
> and probably would have been, found through experiment as well).
It's limits that are especially hard to establish with experimentation. How do you know that you just haven't yet been clever enough? How do you know that this will always work no matter how far you go? Remember the Ariane 5 explosion? The maiden flight of the Ariane 5 rocket, Flight 501, was lost because engineers reused an inertial reference unit whose software had worked flawlessly on every previous Ariane 4 flight; Ariane 5's different flight profile produced horizontal velocity values outside the range the old code had been validated for, and an unchecked numeric conversion overflowed. The limits were not well understood, and experimentation was expensive. These are guiding principles - sometimes understanding the theory helps more than twiddling bits and seeing what happens. Sometimes twiddling bits is enough. Engineering and Theoretical Research should work together.
In my own work there are at least a dozen cases where knowing a theorem has provoked an exploration, that has then turned out to be useful. The investigation would most likely never have started, the techniques never suspected, without that initial theoretical knowledge. Again, engineering and pure math research in combination, but without the pure math to start with, some things might never be found. We can't know that, of course, but for those who have studied math, the connection is clear. For those who haven't, it's harder to see these things happen. It all seems so obvious in retrospect.
> The example you gave of your proof doesn't say all
> that much, you won an argument (you said those
> algorithms were practically useful anyway).
So, you don't get the idea then. I proved something was impossible given our current state of theoretical knowledge, and that if the claims were correct it was an outstanding breakthrough. He proved nothing, and had a system that worked sometimes, and he was unable to know when, and how, it might fail. I predicted - using pure math research - exactly how his system would fail. I could do something he couldn't, using his system that he created.
> I'm sincerely sorry if I come across as hostile.
Hmm. Let's see some of the things you've said:
> ... rarefied "breed" of pure mathematician earning a
> salary is as much of a waste as if billions were spent
> on buggy whip boys.
... and ...
> ... it's the full employment act for pencil pushers.
... and ...
> A lot in common with fundamental religionists.
... and ...
> I never tried getting education in it in the first place.
> I had hunch it's useless and the older I get the more
> I think that was correct.
... and ...
> It's justification for pencil pushers who don't want
> to do real work.
Yes, that does seem hostile. In fact, it seems like you never bothered to study math, and now are trying to convince everyone that something you don't understand cannot possibly be useful.
> I'd just like to see mathematicians own up and change
> the corruption from the inside.
Hmm.
> ... you've taken up a lot it's (sic) resources, including
> some of the smartest people.
So some of the smartest people study math and claim that it is a good thing to be doing. You, on the other hand, claim that it isn't. Your position seems difficult. You claim that something you've not studied, and which is studied and commended by some of the world's smartest people (by your own claim) is, in fact, useless.
Again, I'll never convince you, but I thought your comments should not stand without reply. Perhaps you are in earnest and genuinely believe the things you say, but you are arguing from a position of willful ignorance, which makes it hard to take you seriously.
Which "parameters" are you optimizing? Where did the corresponding model come from?
When is a certain model even applicable? How do you implement the search procedure
efficiently? Is the result actually meaningful? Can you expect future predictions
to make sense or did you overfit to your training data?
To avoid any fluff, let's go with a concrete example: some of the best performing algorithms for segmentation are based on spectral clustering (http://en.wikipedia.org/wiki/Spectral_clustering). What does an eigenvector corresponding to the second-smallest eigenvalue of the normalized graph Laplacian have to do with random walks? How do you compute it?
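For the curious, here is a rough sketch of that step on a toy graph (using the symmetric normalized Laplacian, which is one common choice):

```python
# Toy graph: two triangles {0,1,2} and {3,4,5} joined by a single edge. The
# eigenvector for the second-smallest eigenvalue of the normalized Laplacian
# splits the graph into its two natural clusters by sign.
import numpy as np

A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0            # undirected, unit-weight edges

d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = np.eye(6) - D_inv_sqrt @ A @ D_inv_sqrt    # normalized graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L_sym)           # eigenvalues in ascending order
fiedler = eigvecs[:, 1]                            # second-smallest eigenvalue's vector
print(fiedler > 0)  # one triangle True, the other False (overall sign is arbitrary)
```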
I see that you use VW for machine learning. You should ask the lead developer (John Langford) what he thinks of the efficacy of mathematics for machine learning given most of the algorithms it uses were derived from theoretical considerations (e.g., SGD, LDA, reductions, etc.)
Would you describe him as a "pencil pushing fraud" too?
Yes, he's a pencil pushing fraud for the most part (he might even admit it in private). I don't know how old he is but assuming he's been a professional for 10 years, his big contribution is the few hundred/thousand lines of VW (he's probably done other work, but let's assume this is a significant fraction). Where did the rest of his time go? If he just decided to bang VW out a decade ago he'd be at its current state within a month of starting. VW is only useful because it's fast and it works (and that's not due to any theory). Its theoretical considerations are useful only for essentially drawing that 1 month out to years/decades.
I'll ask him whether he considers himself mostly a pencil pushing fraud when I see him in December. I'll also see what he thinks of the claim that VW's usefulness is "not due to any theory". I think you'll be surprised given he's written a blog post on this topic titled "Use of Learning Theory" http://hunch.net/?p=496 at his blog which, by the way, is called "Machine Learning (Theory)".
It's easy to dismiss what you don't understand, but you should consider the possibility that it is significantly more difficult to develop algorithms like those in VW and "bang out" implementations of them that are fast and correct.
First of all, "learning" is a made up word for parameter search, it's kind of a trick to fool funders to think you're doing cool stuff. Second, his entire blog post is about theory being useful only in a crude way (which basically means not useful) and he's outlined useful rules of thumb (that probaly come from experiment). Is time best spent on gathering data and running experiments to show practical usefulness of different algorithms or on pencil pushing? That isn't made clear.
Firstly, I agree that machine learning is effectively parameter search, but the name is an artefact of history and we both appear to understand what it means, so I don't see how this adds to the argument.
Secondly, no, John didn't say learning theory is "useful only in a crude way", he said, "learning theory is most useful in it’s crudest forms". Big difference. And besides, he says right at the beginning of the post that he believes "learning theory has much more substantial applications [than generating papers]".
To be convincing, theory needs to be precise – if you are not careful about what you are talking about, it is easy to believe things that are not true. However, what I think John is saying is that the value of theory comes from afterwards abstracting away the precision and understanding the main message (i.e., the "crude form") of a theoretical result. In general, I don't think it is possible to get to a convincing "main message" or "crude form" without someone having grappled with the details.
No matter how many experiments you run, you will only ever show that an algorithm works well or not in a typically small, finite number of cases. What theory does is look at those cases and ask something like, "It seems that every time X is true of my problem, an algorithm with property Y works really well. I wonder if that is always true?" This type of question gets carefully formalised and then (hopefully) answered. The process of formalisation (i.e., defining things carefully) can yield new ways of thinking about things (e.g., over-fitting and bias-variance trade-offs), and having an answer to the general question means that you can be assured that future uses of your algorithm will behave as expected.
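As a toy illustration of the over-fitting point, with purely synthetic data:

```python
# Over-fitting sketch: a degree-9 polynomial fits the 10 noisy training points
# essentially exactly, but typically predicts held-out points worse than a
# simple straight line. All data here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=10)   # truth: y = 2x plus noise
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2 * x_test

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_err, 4), round(test_err, 4))
# The degree-9 fit has near-zero training error but (typically) far larger test error.
```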
You seem to have an unshakeable opinion that mathematics/theory/"pencil-pushing" is inherently a waste of time. That's a real shame. Why do you believe the pursuit of precision, insight, and proof are somehow inferior to running experiments? I find both to be valuable and the interplay between the two extremely rewarding.
'Proving Theorems' is a grand sounding statement, but the reality need not be so glamorous! A Theorem as a thing is an encapsulation of a granule of knowledge or, even less precisely, something learnt which is known to be true[1]. To prove a Theorem is to show that the knowledge it encapsulates is correct. Why is this good?
Most theory I have read[2] on AI, machine learning, and knowledge deals in depth with the concept of symbols and symbol manipulation. Some would claim (correctly!) that computers are in essence simply symbol manipulation machines. The thing is, these symbols are really just ways of encapsulating a piece of information. Often, symbols are described as being 'made-up' of other symbols; symbols are used to describe other symbols. As a simple and obvious example:
- bits (low level 'axiomatic' symbols) are organised into groups to create bytes (still low level, not much meaning here yet)
- bytes are associated with letters (in an isomorphic relation which imparts meaning[3])
- letters are organised into groups to form words (a significant amount of meaning has now appeared)
- words are strung together in sentences
- sentences are woven into paragraphs/chapters/themes/poems/stories/epics
If I were to claim that a poem or story has no meaning simply because I can craft it from simple bits, I would be missing something very important about symbol manipulation.
When I take a collection of ideas, snippets of information and learning, and am able to label them with a name, I can transcend my current level of understanding and deal with ideas which are invisible on the lower level. For a programmer, this hierarchy of symbols presents itself as a hierarchy of low-, medium- and high-level languages. Writing a Python program in assembly would be absurd! (although perhaps appropriate in some cases)
Theorems (and Axioms) are the symbols of mathematics, and proving Theorems is symbolic manipulation in one of its purest forms. In fact, proving Theorems is the main way in which new 'meaningful'[4] symbols are created.
So why is proving Theorems useful? In short, because it is how new knowledge is created! (in this context, mathematical knowledge)
Is mathematical knowledge useful? I think anyone living in this modern world of ours (and I think that phrase applies to most people in most of humanity's history) can see that at least some mathematics is useful. Many here have argued about the different merits of developing mathematical knowledge (i.e., proving Theorems) before an application is well known, so I won't labour that point. Instead, I think I'll direct the interested reader to G.H. Hardy's 'A Mathematician's Apology'[5] for a great essay on the many merits and joys mathematics can bring the suitably inclined soul.
If knowledge is a good thing, and proving Theorems increases knowledge, then proving Theorems is a good thing. To say it is better than other ways of spending our efforts is hard, but I think we can definitely agree that it is useful.
[4] An idea again borrowed from Hofstadter. We can string any words together that we want, but that doesn't mean they mean anything to us. Formal systems such as mathematics give us a way to combine symbols into new meaningful symbols. My claim that it is the main way they are created is founded on wild speculation and heavily biased historical observation.
You might be interested in the up-and-coming fields of 3D video games, flight simulators, CG movies, machine learning, supply logistics, autonomous navigation, colonization of Mars, and your personal favorite, algorithmic trading, all brought to you by "theorems" of linear algebra.
Great news: I finally wrote a sorting algorithm that worked for 10/10 of my test cases. Now, I can just plug that number into my unproven statistical test, and find out how likely it is that my algorithm will work for all inputs.
It does not seem to run extra slow for any input, so it probably won't run slowly on any input in actual use. Besides, even if it did I would have no way of proving what types of inputs it failed on, or if another algorithm would do better.
The computer industry wasn't built on Turing completeness. It's an evolution of practical technology starting with the invention of textile machinery (but you can also kind of argue it started with abacuses or even lines drawn in sand).
You are right. Evolution has its mysterious ways. But maybe it's only mysterious to us because our math has not evolved yet to a point where complex systems can also be properly described by theorems.
But then what is the use of the maths? We already have the wealth that comes from the technology. The maths doesn't add much (instead it siphons off resources).
You're arguing for eating our seed corn. Yesterday's maths drove today's technology. Today's maths will drive tomorrow's technology, unless you have your way.
edit: downvoters downvote instead of providing refutation. A lot in common with fundamental religionists.