People do litter, and there aren't many community-driven initiatives to clean up, but I don't think litterers are the source of the garbage problem.
Many municipalities in Rome dispose of garbage through large bins placed on the street at the foot of apartment buildings. Especially during the winter, these bins are not emptied regularly, so waste piles up around them and litter gets carried off by the wind.
The same happens with park garbage bins, with the added challenge that you can't really pick up the park's garbage and bring it to the municipal bins... because those are already overflowing.
I do some algorithmic trading and automated investment in the crypto space. I trade intra-exchange and do triangular arbitrage between pairs (tri-arb). I also occasionally rebalance into an index of coins I like. Note that I do this at a fairly small scale, so I'm not sure my answer matters much.
Although there are a lot of platforms for trading crypto, when it comes to arb I've found that you need control over a lot of things, most importantly:
- which data center you host your code in
- the way you get data from the exchange - not just prices and order books, but also current balances and orders
- the way you push orders to the exchange
With "the way" I mean mostly which APIs you use and how, the last few milliseconds of optimization for me were usually gained by making multiple simultaneous connections to the exchange and trying to figure out the fastest one.
For my rebalanced index-like portfolio, I initially made a little script and then turned it into a side project (plug: https://nazcabot.io).
Don't the fees negate the profitability of this strategy? How often do you see spreads that remain profitable after factoring in taker fees?
Trades are only triggered when the expected profit exceeds the fees. Depending on the market, fairly large opportunities (in the thousands) can pop up a few times an hour. In a triangle, only 1 of the 3 pairs needs an inefficiency in its book to trigger a trade.
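To make the trigger concrete, here's a toy version of the round-trip check in TypeScript, with made-up prices and a 0.1% taker fee per leg:

```typescript
// Toy triangular-arbitrage check: start with 1 unit of currency A,
// walk A -> B -> C -> A, and only trade if the round trip beats the
// three taker fees. All prices here are made up.
const takerFee = 0.001; // 0.1% per leg, three legs total

function roundTrip(priceAB: number, priceBC: number, priceCA: number): number {
  const keep = 1 - takerFee;
  // buy B with A, buy C with B, sell C back into A
  return (1 / priceAB) * keep * (1 / priceBC) * keep * priceCA * keep;
}

const result = roundTrip(0.0421, 0.0018, 0.0000772);
if (result > 1) {
  // ends up with ~1.0157, i.e. ~1.57% profit after fees -> trigger
  console.log(`trigger: ${((result - 1) * 100).toFixed(2)}% after fees`);
}
```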
During the December craze: very profitable. Now: better than not trading.
If I were American and had to pay higher taxes on short-term gains, it would probably not be worth it.
Over the past year I've decided to do something with my savings rather than have it sit in a Dutch bank, earning far less than inflation in the country I live in now.
I've sprinkled the majority across a selection of Vanguard ETFs: 80/20 stocks/bonds, heavy on tech stocks, with the rest in a 70/20/10 split across large-cap, mid-cap, and small-cap.
A little (more than) play money is in crypto. I've been successfully algo trading crypto for fun, largely through triangular arbitrage, and investing in some of the projects that I think will have an impact. The "investment" part is in a basket of 20 or so coins weighted by how likely I think each is to succeed in its sector within the next 2 years, rebalanced monthly. I built the strategy with a tool I've been working on with friends in our spare time: https://nazcabot.io
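The rebalancing step itself is conceptually simple; a sketch in TypeScript with made-up holdings and weights (this is just the concept, not how the tool is implemented):

```typescript
// Monthly rebalance sketch: given current holdings valued in a common
// quote currency and target weights, compute the value to buy
// (positive) or sell (negative) per coin. Names/numbers are made up.
function rebalance(
  values: Record<string, number>, // current value per coin, e.g. in BTC
  targets: Record<string, number> // target weights, summing to 1
): Record<string, number> {
  const total = Object.values(values).reduce((a, b) => a + b, 0);
  const trades: Record<string, number> = {};
  for (const coin of Object.keys(targets)) {
    trades[coin] = targets[coin] * total - (values[coin] ?? 0);
  }
  return trades;
}

// rebalance({ ETH: 1.2, XMR: 0.4 }, { ETH: 0.6, XMR: 0.4 })
// -> { ETH: -0.24, XMR: 0.24 }
```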
Very interesting to see this. I've been running a similar infrastructure, sometimes rendering up to 500k pages a day, although often without images. I'm also running on Digital Ocean, but using nightmare.js (https://github.com/segmentio/nightmare), which runs on top of Electron, which in turn runs on top of Chromium.
The CPU and RAM patterns I see are different, with CPU usage fixed near the maximum and memory oscillating between 65% and 80%. I believe this is due to the different usage pattern: I basically always have at least 20-30 jobs running concurrently on each machine, and they're usually fairly long (up to 10 minutes or so).
Contrary to what you mention, I've never had an issue with pages crashing and bringing the whole browser down. Maybe it has happened, but it's definitely negligible compared to the benefits I get by running, say, 5 pages in parallel. For some tasks I've also had some luck overriding the cross-origin policy and using dirty ol' iframes to render multiple webpages in the same session.
I've considered migrating to puppeteer, so it's encouraging to see large-scale projects sharing their experience with it.
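If I did migrate, I'd expect the parallel-pages setup to translate to puppeteer roughly like this (the URLs, the concurrency of 5, and the extraction step are placeholders):

```typescript
import puppeteer from "puppeteer";

// Sketch: one browser, a handful of pages in parallel, with each
// page's failure isolated by try/catch so it can't take the batch
// down with it.
async function renderAll(urls: string[], concurrency = 5): Promise<void> {
  const browser = await puppeteer.launch();
  const queue = [...urls];
  const workers = Array.from({ length: concurrency }, async () => {
    while (queue.length > 0) {
      const url = queue.shift()!;
      const page = await browser.newPage();
      try {
        await page.goto(url, { waitUntil: "networkidle2", timeout: 60_000 });
        // ... extract data or take a screenshot here ...
      } catch (err) {
        console.error(`failed: ${url}`, err);
      } finally {
        await page.close();
      }
    }
  });
  await Promise.all(workers);
  await browser.close();
}
```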
Same experience here - we run a Docker instance of Chrome using tabs for multiple pages rather than multiple browsers, and they regularly run for days without issues. Of course RAM usage gradually grows, but it's easy enough to systematically stop & start the container, thanks to some built-in error and fault handling that retries any requests which failed.
I can't say I've seen one tab bring down the entire browser, though I'm sure that's feasible; thanks to Docker and the fault handling above, it'd restart the instance and be back up within seconds.
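For the curious, the retry side can be pretty dumb; a sketch, where renderPage() stands in for whatever talks to the container, and a Docker restart policy (e.g. --restart unless-stopped) brings Chrome back on its own:

```typescript
// Sketch of the retry side: if the render service dies mid-request,
// Docker's restart policy brings the container back, and the caller
// just retries with a short backoff. renderPage() is hypothetical.
async function withRetry<T>(fn: () => Promise<T>, attempts = 5): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // wait for the container to come back up (restarts take seconds)
      await new Promise((r) => setTimeout(r, 2000 * (i + 1)));
    }
  }
  throw lastErr;
}

// const html = await withRetry(() => renderPage("https://example.com"));
```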
This is what I initially noticed as well (the app will run for several days and eventually the account will cancel due to unavailability ... you'll get paged).
The screenshot I showed in that post is an instance that's been running for _months_ under high load. I can't stress enough that using tabs/pages will always result in frequent restarts, which can be really tricky if there are other sessions that need to finish gracefully.
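To give an idea of what "finish gracefully" involves, a rough sketch (the helper names and the 30s deadline are made up):

```typescript
// Sketch of a graceful restart: refuse new jobs, let in-flight pages
// finish (up to a deadline), then relaunch the browser.
let draining = false;
const inFlight = new Set<Promise<unknown>>();

function submit<T>(job: () => Promise<T>): Promise<T> {
  if (draining) return Promise.reject(new Error("browser restarting"));
  const p = job().finally(() => inFlight.delete(p));
  inFlight.add(p);
  return p;
}

async function gracefulRestart(relaunch: () => Promise<void>): Promise<void> {
  draining = true;
  // give open sessions up to 30s to finish before pulling the plug
  const deadline = new Promise((r) => setTimeout(r, 30_000));
  await Promise.race([Promise.allSettled([...inFlight]), deadline]);
  await relaunch(); // close the old browser and launch a fresh one
  draining = false;
}
```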
It depends, I guess. If what you're talking about is a disagreement between two parties about the object of the contract or its conditions, I don't see why you couldn't resolve it the same way you would with a paper contract. I don't know if any country's definition of a contract is broad enough to include smart contracts, but you could perhaps bring this to court.
If the two parties can agree on new terms for the (smart) contract, a new contract can be issued.
As an aside, this is painful, as a smart contract's address will usually be linked to some user-facing application or just to your own wallet.
A way to facilitate this is a proxy contract: a contract that exists only to forward instructions to a secondary contract (the main one). The main contract can then do whatever it is meant to do.
Now, if there is a disagreement or a bug, once an agreement is reached you can deploy your new contract, change the address on the proxy contract, and you're done.
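The real thing would be a Solidity contract (typically built around delegatecall), but the shape of the pattern, sketched in TypeScript with made-up names, is roughly:

```typescript
// Illustration only: the proxy's address stays stable while the
// implementation behind it can be swapped out after an upgrade.
interface Logic {
  execute(input: string): string;
}

class ProxyContract {
  // the only state the proxy keeps: a pointer to the main contract
  private implementation: Logic;

  constructor(initial: Logic) {
    this.implementation = initial;
  }

  // every call is forwarded to whatever the current implementation is
  call(input: string): string {
    return this.implementation.execute(input);
  }

  // after a bug fix or renegotiation, point at the new contract
  upgrade(next: Logic): void {
    this.implementation = next;
  }
}
```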
Hi, my name is Ale. I'm a passionate developer who has been coding since I was a kid. Although I have extensive experience developing full-stack for the web, I'm looking for a remote position that will let me work with data and large distributed pipelines: machine learning engineer, data engineer.
I have experience shipping real-time NLP systems for information extraction and classification. I've really loved working on ML projects and want that to be my everyday jazz.
Note: I'm an EU citizen and require a work visa, although I did my undergrad at uWaterloo.
I'm Alessandro, a full-stack dev currently working in Amsterdam and looking to relocate permanently to Canada in October.
I'm looking to join a small, (semi-)generalist team; ideally I'd be sharing frontend/backend expertise while also dedicating time to machine learning, NLP, AI, or other non-web problems.
Email: two.is.literally.more.than.one [at] gmail com
Attention: this is a joint application.
We are Alessandro and Zak; we met during our [Computer Science] undergrads at the University of Waterloo. We've worked (projects, hackathons, etc.), lived, and travelled together, and we don't want that to end. We both intend to move to Toronto in October and would like to work on the same team. Alessandro currently lives in Amsterdam, NL, while Zak is in Waterloo, ON.