
I wonder if FreeBSD can saturate 1Gbit/s with the TUN wireguard driver. Linux's native driver is likely faster.


Netgate[1] is sponsoring the development of an in-kernel implementation of wireguard for FreeBSD: https://forum.netgate.com/post/891869

[1]: Netgate is the company behind pfsense, a router/firewall distro of FreeBSD


Oh this is awesome!

A kernel-space wireguard implementation and something like bhyve are the last 2 things I need to be able to start using fbsd a lot more.


Any idea what's the target date for it to land in the tree?


when it's ready, of course.

It just started passing packets (ping) last week. It would have been at this point weeks ago, had Jason not baked his e-mail address into the handshake protocol. (Harumph.)


Matt changing random algorithm parameters he didn't understand is kind of on him, sorry. I'm glad of the work he's doing, and of your funding of FreeBSD native wireguard work, but changing random cryptographic parameters before he even had packets passing was an exercise in foot-shooting.


It was certainly naive of him. However, you can't deny that baking the author's e-mail & domain name into the protocol is pretty narcissistic.


Conrad - although your observation is correct, this dig is a bad look when you've essentially never set foot outside of your fairly limited technical sandbox.


Let's find out with a sample size of 1:

Server is an HP Microserver with an Intel Xeon E3-1265L V2 @ 2.50GHz running FreeBSD 12.1. Client is a custom build with an Intel Core i7-4790K @ 4.00GHz running NixOS 20.03.

  $ ip route show
  default via 192.168.0.1 dev eno1 proto dhcp src 192.168.0.4 metric 203 
  192.168.0.0/24 dev eno1 proto dhcp scope link src 192.168.0.4 metric 203 
  192.168.1.0/24 dev wg0 scope link 
  $ iperf3 -c 192.168.0.2 # no vpn
  Connecting to host 192.168.0.2, port 5201
  [  5] local 192.168.0.4 port 37382 connected to 192.168.0.2 port 5201
  [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
  [  5]   0.00-1.00   sec   115 MBytes   961 Mbits/sec    0    571 KBytes       
  [  5]   1.00-2.00   sec   111 MBytes   929 Mbits/sec    0    571 KBytes       
  [  5]   2.00-3.00   sec   112 MBytes   939 Mbits/sec    0    571 KBytes       
  [  5]   3.00-4.00   sec   111 MBytes   929 Mbits/sec    0    571 KBytes       
  [  5]   4.00-5.00   sec   112 MBytes   938 Mbits/sec    0    571 KBytes       
  [  5]   5.00-6.00   sec   111 MBytes   929 Mbits/sec    0    571 KBytes       
  [  5]   6.00-7.00   sec   112 MBytes   938 Mbits/sec    0    571 KBytes       
  [  5]   7.00-8.00   sec   112 MBytes   938 Mbits/sec    0    571 KBytes       
  [  5]   8.00-9.00   sec   111 MBytes   929 Mbits/sec    0    571 KBytes       
  [  5]   9.00-10.00  sec   112 MBytes   938 Mbits/sec    0    571 KBytes       
  - - - - - - - - - - - - - - - - - - - - - - - - -
  [ ID] Interval           Transfer     Bitrate         Retr
  [  5]   0.00-10.00  sec  1.09 GBytes   937 Mbits/sec    0             sender
  [  5]   0.00-10.00  sec  1.09 GBytes   934 Mbits/sec                  receiver

  iperf Done.

  $ iperf3 -c 192.168.1.1 # vpn
  Connecting to host 192.168.1.1, port 5201
  [  5] local 192.168.1.5 port 60358 connected to 192.168.1.1 port 5201
  [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
  [  5]   0.00-1.00   sec   108 MBytes   905 Mbits/sec    2    274 KBytes       
  [  5]   1.00-2.00   sec   106 MBytes   890 Mbits/sec    0    274 KBytes       
  [  5]   2.00-3.00   sec   107 MBytes   895 Mbits/sec    0    274 KBytes       
  [  5]   3.00-4.00   sec   107 MBytes   895 Mbits/sec    0    289 KBytes       
  [  5]   4.00-5.00   sec   107 MBytes   895 Mbits/sec    0    289 KBytes       
  [  5]   5.00-6.00   sec   107 MBytes   896 Mbits/sec    0    290 KBytes       
  [  5]   6.00-7.00   sec   104 MBytes   874 Mbits/sec    0    290 KBytes       
  [  5]   7.00-8.00   sec   106 MBytes   885 Mbits/sec    0    290 KBytes       
  [  5]   8.00-9.00   sec   105 MBytes   885 Mbits/sec    0    290 KBytes       
  [  5]   9.00-10.00  sec   107 MBytes   896 Mbits/sec    0    290 KBytes       
  - - - - - - - - - - - - - - - - - - - - - - - - -
  [ ID] Interval           Transfer     Bitrate         Retr
  [  5]   0.00-10.00  sec  1.04 GBytes   892 Mbits/sec    2             sender
  [  5]   0.00-10.00  sec  1.04 GBytes   891 Mbits/sec                  receiver

  iperf Done.


I would assume you're testing that on stock kernel settings, which aren't really tuned for the highest network throughput. There's a lot that can be done with kernel sysctl tuning to saturate the NIC, and I'd expect you to see somewhat better results after doing so.

This is a very nice starting point for those interested: https://calomel.org/freebsd_network_tuning.html
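For a rough idea of what that kind of tuning looks like: the usual starting point is raising the socket-buffer ceilings so TCP auto-scaling has room to work on fast links. The values below are illustrative examples only, not recommendations from that guide; measure before and after on your own hardware.

  # /etc/sysctl.conf -- example buffer tuning for >=1GbE (illustrative values)
  kern.ipc.maxsockbuf=16777216        # raise the hard cap on socket buffer size
  net.inet.tcp.sendbuf_max=16777216   # ceiling for TCP send-buffer auto-scaling
  net.inet.tcp.recvbuf_max=16777216   # ceiling for TCP receive-buffer auto-scaling

You can apply these at runtime with sysctl(8) and re-run iperf3 to see whether they actually move the needle for your workload.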


I am running stock kernel settings.

I would naively expect that the default kernel settings for both Linux and FreeBSD would allow me to saturate a 1Gbit link in a LAN.

Anyway, this looks like one of those things where I could go down the rabbit hole of tuning (so that I'm not just copy-pasting swathes of configuration without understanding it), but this was just a quick demo showing that, basically, the userspace implementation isn't too slow.


At least in the case of FreeBSD, network saturation isn't an active goal of the default kernel settings, hence the link I pasted. It's especially nice because it explains a lot of the things it proposes, so the copy & paste wouldn't be so blind. It's really a good read.

And I do get the point of your test, and I agree with the anecdotal conclusion :)


Some quite knowledgeable people in the field of BSD networking, including Henning Brauer, maintainer of OpenBSD's PF, have little love for the instructions given on the site you are linking to:

https://marc.info/?l=openbsd-misc&m=130105013025396&w=2


Taking settings for FreeBSD and blindly applying them to OpenBSD isn't a great idea, yeah.

Running the defaults is a good place to start, but if you don't get the results you're seeking, the linked article lists a lot of settings that are worth looking at.

There are a lot of settings that are reasonable to tune for specific uses, which is why they're configurable. Knowing which ones to poke at first is a good thing.


No... calomel is a bunch of bad recommendations for FreeBSD.


Judging from the "no vpn" set of numbers, the stock kernel is admirably prepared for the highest network throughput.


It can saturate 1Gbps with the TUN driver, sure. 10Gb is harder with TUN. Linux's native driver is lower overhead, although as siblings point out, there is work in progress on a native FreeBSD kernel driver.



