Apparently no one here has been on locked-down wifi where only TCP ports 80 and 443 are open to the internet. That's not that uncommon to see and is a completely legitimate reason to start this project. There's nothing "wrong" with udptunnel or WireGuard other than that they don't use the transport protocols that the network he is on allows.
I think you might be missing the point of UDP Tunnel: Tunnel UDP packets over a TCP connection
The previous commenter was pointing out that you don’t need your VPN to support TCP in order to tunnel it over TCP, since that is exactly what UDP Tunnel is designed to do.
Well yes, that would block this. But we've now flipped from "why not use X instead of Y" / "because some networks block Y" to "X is still bad because my network would block it". Your network doesn't make it a bad choice for his or other networks!
There will no doubt be a way to get around your network's propensity to block traffic that looks encrypted, though we are getting very specific to that circumstance in order to do so. Perhaps speaking actual valid HTTP on port 80 and sending/receiving data via POST requests (polling for receives when there is nothing to send) would be sufficient, though not efficient. If not, you could try hiding the encrypted traffic inside what looks like plain text, maybe sending book passages and flipping between upper and lower case to represent 1 and 0 bits... Of course, any human looking at packets in that stream is going to see that you're trying to hide something, but that's a problem with all of these techniques.
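The case-flipping idea is easy to sketch. This is a toy steganography example (all function names made up, nothing here is from any of the projects discussed): each bit of the payload is encoded as the case of the next letter in a carrier text, uppercase for 1, lowercase for 0.

```c
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Encode each bit of `data` into the case of successive letters in
 * `carrier` (uppercase = 1, lowercase = 0). Non-letters are skipped.
 * Returns 0 on success, -1 if the carrier has too few letters. */
static int stego_encode(char *carrier, const unsigned char *data, size_t len)
{
    size_t bit = 0, nbits = len * 8;
    for (char *p = carrier; *p && bit < nbits; p++) {
        if (!isalpha((unsigned char)*p))
            continue;
        int b = (data[bit / 8] >> (7 - bit % 8)) & 1;
        *p = (char)(b ? toupper((unsigned char)*p) : tolower((unsigned char)*p));
        bit++;
    }
    return bit == nbits ? 0 : -1;
}

/* Recover `len` bytes from the letter case of `carrier`. */
static void stego_decode(const char *carrier, unsigned char *out, size_t len)
{
    size_t bit = 0, nbits = len * 8;
    memset(out, 0, len);
    for (const char *p = carrier; *p && bit < nbits; p++) {
        if (!isalpha((unsigned char)*p))
            continue;
        if (isupper((unsigned char)*p))
            out[bit / 8] |= (unsigned char)(1 << (7 - bit % 8));
        bit++;
    }
}
```

At 1 bit per letter the overhead is enormous, which is exactly why nobody does this in practice; it just illustrates how trivially the bits can ride on innocuous-looking text.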
The software Shadowsocks (and its variants), developed by a Chinese hacker to circumvent the Chinese internet censorship, may be of some use here. It is able to tunnel all packets through a seemingly legitimate HTTP connection, which can be used to fool automated traffic snooping attempts.
> i've always wondered how companies get away with decrypting certain sites, i.e. Healthcare.
Often, through some kind of employee code of conduct; think along the lines of "I agree to refrain from using my work computer for personal business." or similar. Then, if something sensitive is decrypted, the employer has some legal cover.
In my experience, those aren't decrypted. I see company generated certs for most https, but not banks and healthcare sites. I'm guessing there is some sort of whitelist.
IP address matching: Watch raw IP layer, pass through TLS traffic to some IP range, this requires vigilance to ensure the IP range maps well to the set of sites you're OK not decrypting and doesn't include sites you want to decrypt.
SNI matching: During TLS ClientHello watch the SNI provided by the client, if it's on a whitelist, let the entire connection through. Clients aren't obliged to be honest and servers aren't required to even look at SNI, if they serve only a single site why check?
Certificate CN/SAN matching: Stall after the TLS ClientHello, wait for the responding ServerHello and Certificate from the server, examine the certificate for a hostname, and check whether it's whitelisted. Otherwise, decrypt the connection.
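For all three techniques, the whitelist check itself is simple. Here is a hypothetical sketch (helper names made up; I'm assuming the common convention that a `*.example.com` entry matches exactly one leading label):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical check: does `host` match a whitelist `pattern`?
 * "*.example.com" matches "www.example.com" but not "example.com"
 * or "a.b.example.com" (one leading label only). */
static int host_matches(const char *pattern, const char *host)
{
    if (strncmp(pattern, "*.", 2) == 0) {
        const char *dot = strchr(host, '.');
        return dot != NULL && strcmp(dot + 1, pattern + 2) == 0;
    }
    return strcmp(pattern, host) == 0;
}

static int whitelisted(const char *const *wl, size_t n, const char *host)
{
    for (size_t i = 0; i < n; i++)
        if (host_matches(wl[i], host))
            return 1;
    return 0;
}
```

The hard part is never this comparison; it's that (as the rest of the thread explains) the hostname being compared comes from data the client or server can freely lie about.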
That last technique is very popular, and real middlebox companies (e.g. Cisco) have argued that since it doesn't work in TLS 1.3 (the Certificate message is encrypted so they can't read it), this is a significant security-relevant change. In the rant below I will show how to bypass this "security" check, mostly because it is useless; companies selling it have been selling snake oil. Either they're too stupid to know it doesn't work, or they assume their customers are too stupid. Either way, why would you deal with a "security" company like that?
1. Certificate contains only public documents, one or more X.509 certificates. Bad guys can get Google's certificate, or that of a bank, STI clinic or government regulator by simply connecting to those outfits and gathering the certificate.
2. The legitimate _client_ (e.g. your web browser) will need proof that the web site knows the Private Key corresponding to the public key in the certificate, but that proof is encrypted, either implicitly (RSA key exchange sends the session keys encrypted with the public key) or explicitly (modern cipher suites present an encrypted public key proof of ownership for the session) and so a middlebox can't see it except by interposing and decrypting everything.
3. Thus a "rogue" client can just connect to a "rogue" TLS endpoint and send SNI for www.google.com [pick any whitelisted hostname here], the server sends back a Certificate message for www.google.com and then the client just ignores the public key inside that certificate and presses on using a defined public key for the rogue server.
The middlebox cannot detect this happening, regardless of whether TLS 1.3 or earlier are used, it works the same and the "security" features in the middlebox don't prevent it.
Again, middlebox vendors have been _told_ about this, they either aren't bright enough to understand it (so you should not trust them) or they are lying to customers in the hope the customers don't understand it (so you should not trust them) and either way the result is the same.
> The middlebox cannot detect this happening, regardless of whether TLS 1.3 or earlier are used, it works the same and the "security" features in the middlebox don't prevent it.
Can't the middlebox just connect to the same endpoint, verify the certificate itself by checking that it is signed by a proper CA, and then, if it is not, drop the connection?
"Checking that it is signed by a proper CA" isn't relevant, the certificate presented is the real one, like I said they're public documents, there's no reason to use fakes.
But the middlebox can, indeed, verify that when it connects to this same (IP, port) pair that endpoint is able to prove it's the real thing. I have never seen one that does this, and it won't prove anything about other connections, since our "rogue" TLS server can proxy everything to the real whitelisted server and that will pass, but yes you could do that.
You can invent arbitrarily sophisticated schemes and counter-schemes in this space. The 32 bytes of ClientHello random in particular mean you can never tell whether the client is signalling to the server or not if you even sometimes choose not to interpose.
You could have an outer TLS tunnel that accepts the MITM certificate, then do HTTP, then a WebSocket upgrade, then your VPN-over-WebSocket. Or fake pipelined HTTP that exchanges data bidirectionally.
But really, you don't have internet access at that point, so you shouldn't expect internet software to work.
It allows you to tunnel UDP over a fake, non-lossless TCP connection. That is, it wraps packets in TCP headers to make them look like TCP to firewalls, but it doesn't actually implement TCP; instead, each "TCP" packet corresponds to one UDP packet, and it makes no attempt to resend dropped packets. This way you avoid the problems with TCP-over-TCP.
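The wrapping step can be sketched roughly like this, assuming a minimal 20-byte header with no options and hand-rolled big-endian packing. This is illustrative only, not udptunnel's or udp2raw's actual code, and it deliberately implements none of real TCP's semantics (no handshake, no retransmission, no checksum):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Fields of a minimal fake TCP header (no options). */
struct fake_tcp {
    uint16_t sport, dport;
    uint32_t seq, ack;
    uint8_t  flags;      /* e.g. 0x18 = PSH|ACK */
    uint16_t window;
};

static void put16(uint8_t *p, uint16_t v) { p[0] = (uint8_t)(v >> 8); p[1] = (uint8_t)v; }
static void put32(uint8_t *p, uint32_t v)
{
    p[0] = (uint8_t)(v >> 24); p[1] = (uint8_t)(v >> 16);
    p[2] = (uint8_t)(v >> 8);  p[3] = (uint8_t)v;
}

/* Wrap one UDP payload in a 20-byte fake TCP header; one datagram
 * becomes one "TCP" packet. Returns the total size written. */
static size_t fake_tcp_wrap(uint8_t *out, const struct fake_tcp *h,
                            const uint8_t *payload, size_t len)
{
    put16(out + 0, h->sport);
    put16(out + 2, h->dport);
    put32(out + 4, h->seq);
    put32(out + 8, h->ack);
    out[12] = 5 << 4;           /* data offset: 5 words = 20 bytes, no options */
    out[13] = h->flags;
    put16(out + 14, h->window);
    put16(out + 16, 0);         /* checksum: left for raw-socket/IP-layer code */
    put16(out + 18, 0);         /* urgent pointer unused */
    memcpy(out + 20, payload, len);
    return 20 + len;
}
```

Since dropped packets are simply never resent, the inner protocol (e.g. WireGuard over UDP) sees ordinary UDP loss semantics, which is the whole point of avoiding TCP-over-TCP.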
If you're going to pretend to be TCP, you probably need to look like TCP to middleboxes. When we investigated this a few years back, we found that on port 80, only 85% of the client locations we tested would pass TCP if there were holes in the sequence space. In fact this heavily influenced the design of Multipath TCP (MPTCP). The paper is here, relevant section is 4.3:
https://conferences.sigcomm.org/imc/2011/docs/p181.pdf
In FakeTCP header mode, udp2raw simulates a 3-way handshake while establishing a connection, and simulates seq and ack_seq during data transfer.
    --seq-mode <number>    seq increase mode for faketcp:
        0: static header, do not increase seq and ack_seq
        1: increase seq for every packet, simply ack the last seq
        2: increase seq randomly, about every 3 packets, simply ack the last seq
        3: simulate an almost-real seq/ack procedure (default)
        4: similar to 3, but do not consider the TCP Window Scale option;
           maybe useful when the firewall doesn't support TCP options
--seq-mode
The FakeTCP mode does not behave 100% like a real TCP connection. ISPs may be able to distinguish the simulated TCP traffic from real TCP traffic (though it's costly). seq-mode lets you change the seq increase behavior slightly. If you experience connection problems, try changing the value.
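As a sketch of what seq-mode 1 might do, based only on the help text above (assumed semantics, not udp2raw's actual implementation): bump our seq by the payload length on every send, and ack whatever seq range we last saw from the peer.

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed seq-mode 1 bookkeeping: seq advances by payload length on
 * every packet we send; ack_seq simply tracks the last peer seq seen. */
struct seq_state {
    uint32_t seq;       /* next seq we will send */
    uint32_t peer_seq;  /* last peer seq + payload length observed */
};

static void mode1_send(struct seq_state *s, size_t payload_len,
                       uint32_t *out_seq, uint32_t *out_ack)
{
    *out_seq = s->seq;
    *out_ack = s->peer_seq;            /* "simply ack the last seq" */
    s->seq += (uint32_t)payload_len;   /* increase seq for every packet */
}

static void mode1_recv(struct seq_state *s, uint32_t peer_seq,
                       size_t payload_len)
{
    s->peer_seq = peer_seq + (uint32_t)payload_len;
}
```

Mode 0 would skip both updates entirely, and mode 3 would additionally model things like delayed acks and window updates to look closer to a real stack.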
There are so many issues with udp2raw, including: poor support, weird configurations, poor code quality, strange DNS solution and loss spikes. Simply put, it's too unreliable at the moment.
DNS resolution doesn't seem to work on Windows with OpenVPN (ICMP, TCP, UDP work just fine though), on a plain OpenVPN connection everything is working as expected. Maybe I'm wrong and there's a setting, which probably solves the issue, but I've already tried various configurations, including buffer reductions, OpenVPN tweaks, etc. Loss spikes are from small packet size limitation (1200 is a recommended value), programs may try to send big packets, leading to packet fragmentation.
There's no more right or wrong with VPNs, as you can still work out what type of traffic is passing down the wire by the volume of encrypted traffic. YouTube traffic surges, which gives you an idea they are watching YouTube; large file downloads like Linux distros will typically be a continuous, consistent level of traffic; webpages are short and brief. Google's services like DNS, APIs etc. are also joined up in real time, but DYOR. The NSA's Ghidra is a useful tool to start with and is superior to more expensive reverse engineering tools on offer.
Could be added to the app, it's just a matter of standardizing it. You probably wouldn't want to run a standalone userspace UDP tunnel process anyway.
Though it would probably be wise at that point to make it congestion control aware, since TCP over TCP has some issues which are not considered solved.
The technology mentioned by the commenter - UDP Tunnel - is designed to tunnel UDP packets through TCP. I was agreeing with the commenter that it would be a valid solution. I was also pointing out that the days of blocking UDP by default are numbered with the incoming HTTP/3 changes.
It's cool but the author's motivation doesn't make sense.
OpenVPN was too hard to setup so they decided to write their own VPN from scratch? It's cool as an academic endeavor but by actually using it, they not only tossed out all the years of security work and the audits OpenVPN has gone through but also spent a ton of time creating something that they now will have to personally maintain.
I've tried 3 separate times now to set up the necessary pieces. I get all the way to the end and it just... doesn't work. And I'm just not a good enough network engineer to sniff out why.
A lot of people in the low-end VPS community use this [1] -- and I've used it ever since. The installation script just works. Adding a new user? Run the script. I don't think we're alone in the struggle.
Thanks for the resource! It's always a pet project to try and set it up, so I never researched any install helpers very heavily. I'll definitely try this next.
OpenVPN takes some work to set up, but it works over proxies very, very well. I can VPN into my cloud VM over the corporate Squid-based proxy (Barracuda and plain Squid) from my Linux desktop, over port 443. But yes, things could be much simpler to set up for the average user; it shouldn't take hours to learn the configs and get going.
I automatically cringe and walk away when I see tcp over tcp. I’ve been bitten by it too many times. Someone correct me if I’m wrong, but it’s fundamentally incorrect and is pretty much guaranteed to devolve into pathological cases.
Here is what the author of DSVPN says in the readme about that: «TCP-over-TCP is not as bad as some documents describe. It works surprisingly well in practice, especially with modern congestion control algorithms (BBR). For traditional algorithms that rely on packet loss, DSVPN couples the inner and outer congestion controllers by lowering TCP_NOTSENT_LOWAT and dropping packets when congestion is detected at the outer layer.»
This is wrong. BBR still retransmits if packets do not arrive on time, and any retransmission in the inner TCP stack automatically becomes badput that wastes the outer TCP stack's capacity, which is likely also doing retransmission for packet losses at a lower layer at the same time.
TCP_NOTSENT_LOWAT is not a solution to this either. It is a solution for HTTP/2, which does not itself do retransmission, to be more aware of the congestion state and prioritize traffic properly. It makes the buffer smaller, so congestion is reported to the upper layer earlier, but still much later than when the actual congestion happens, distorting the upper/inner TCP congestion control. Also, a 128 KiB value is hardcoded here for this knob, effectively rendering it only useful up to a bandwidth-delay product of about 5 Mbps * 200 ms RTT.
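To check the arithmetic behind that last claim: 5 Mbps * 200 ms works out to 125,000 bytes, i.e. roughly 122 KiB, which is indeed about the 128 KiB hardcoded value.

```c
#include <stdint.h>

/* Bandwidth-delay product in bytes:
 * (bits/second * RTT in milliseconds) / (1000 ms/s) / (8 bits/byte). */
static uint64_t bdp_bytes(uint64_t bits_per_sec, uint64_t rtt_ms)
{
    return bits_per_sec * rtt_ms / 1000 / 8;
}
```

Any path with a larger bandwidth-delay product (faster link, longer RTT, or both) would need a bigger unsent-data threshold to keep the pipe full.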
I’ve noticed a pattern where crypto people don’t seem to understand the edge cases of tcp congestion control, so I agree that this workaround is suspicious. Of course, it’s better than no VPN if your UDP is blocked. I like sshuttle’s way better (but I’m biased).
However, it’s not broken in the exact way you’re thinking. TCP_NOTSENT_LOWAT is different from TCP_LOWAT. The latter would imply a hardcoded bandwidth-delay product. The one they’re using is a margin on top of the bandwidth-delay product, which mostly just depends on a fast enough CPU. They’re using a surprisingly high value for it, though.
I often use a VPN over TCP port 443 from a network that blocks all UDP traffic. It works quite well in practice.
I use a wired connection, all routers and switches on our side are oversized professional units and the uplink is a beefy fiber link, so my packet loss rate is basically zero. Your results may vary.
Yeah, if the environment is tightly controlled with known configurations and not too much congestion, it’ll work alright. But try with a spotty connection, differing MTUs, irregular latencies, etc.
I am much more intrigued by the attempts at breaking out of networks by tunneling over DNS (where that flavor of UDP traffic is let through) and ICMP (when not blocked).
In the messier parts of the real world where UDP is treated badly, it often works better than UDP VPNs. Unfortunate but there we go. The main reason for using a TCP port 443 VPN is if your network administrator blocks everything else.
This. I get that WireGuard maintainers don't want to deal with users complaining about TCP performance problems, but the result is users have to use a different VPN client which may be less secure. Totally within their rights, but annoying.
> Uses only modern cryptography, with formally verified implementations.
That's a bit light on details. Does it have hardware acceleration? Replay attack protection? Perfect forward secrecy? What are the underlying algorithms? Implementation verified by whom?
> Small (~25 KB), with an equally small and readable code base. No external dependencies.
This looks cool, however I don't like the fact that it doesn't use a trusted crypto library such as libsodium. It is likely to get less review and if weaknesses are detected in the algorithms, it is less likely to be improved.
> This looks cool, however I don't like the fact that it doesn't use a trusted crypto library such as libsodium. It is likely to get less review and if weaknesses are detected in the algorithms, it is less likely to be improved.
That's somewhat amusing given that the author of dsvpn is the author of libsodium.
> That's a bit light on details. Does it have hardware acceleration? Replay attack protection? Perfect forward secrecy? What are the underlying algorithms? Implementation verified by whom?
The Charm library appears to implement the Xoodoo cipher using SIMD instructions (SSE on x86; NEON on ARM). Implemented by the same author as the VPN. There appears to be a key exchange handshake to prevent replay attacks. PFS: doesn't look like it.
Key rotation can in many cases be ultra annoying and you dismiss PFS far too easily. After Snowden, we know for a fact nation states are conducting massive data capture operations. In some cases, visiting specific trigger sites is enough to tag you. PFS is very useful and there is nothing wrong with admitting that DSVPN can be unsuitable for more people than just axe murderers.
While I don't think setting up openvpn is "difficult", in the sense that it's a hard problem to solve, I would definitely not go as far as to say that it is "dead easy".
Setting up openvpn is definitely involved[0]. And I think being concerned that you've configured something incorrectly is a real issue, especially when it comes to security.
But you would trust this toy project in terms of security? The point of something like OpenVPN is that all security cases and bugs are worked out already, and there is tons of information for all use cases, and everything is already polished.
Sure you might need to learn some new configuration options, but you won't just use them in this project, they will serve you for the rest of your life for all possible VPN usage cases.
> all security cases and bugs are worked out already
This is likely not the case. While it's true that there hasn't been a severe/highly exploitable published vulnerability in OpenVPN for the last decade or so, that doesn't mean that there aren't vulnerabilities.
I've found the server to be unreliable: when I used OpenVPN, I had to restart it regularly to make it possible to connect again (it would begin to connect, but then just stall).
Plus the project lacks features; you can't compare it to OpenVPN. It basically handles one single use case, for one OS, etc.
So basically the author thought it was simple enough for him to write new software, but not simple enough to set up OpenVPN? (which anyone can do, especially in the shared-secret case?) This project smells.
I would not recommend anyone use this project; it seems like something the author simply enjoyed writing, not something created for real use.
Also how is "using ports 80 and 443" is a new "feature" when every other VPN can do exactly that?
> Plus the project lacks features; you can't compare it to OpenVPN. It basically handles one single use case, for one OS, etc.
Looks like mainly just Windows is missing? I didn't check it carefully, but seems to be fine on macOS and Linux. Not sure if the server would run on macOS, but I don't think that would be a common use case.
Can somebody well versed explain what the difference between TCP and UDP is in this case? I obviously know what they are; I just don't understand why it's such a debatable choice applied to VPNs.
01CGAT’s link sums it up as: TCP is not designed to be stacked; doing so causes the exponentially increasing retry timeout (a reliability feature of the protocol) of the two layers to conflict, provoking excessive retransmission attempts by the upper-layer TCP.
The detailed explanation is in the linked article: “Why TCP over TCP is a bad idea”[0]. It was broken for me so I dug up an archive.org copy.
The upper layer's transmission control and retransmission attempts are completely unnecessary, as delivery is already guaranteed by the lower-layer TCP. The upper-layer TCP, unaware of the TCP underneath and with a timeout that increases on each acknowledgment failure, can queue up more retransmissions than the lower layer can process, increasing congestion and inducing a meltdown effect.
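A toy model of the interaction (all numbers hypothetical): suppose the outer TCP spends a while recovering one lost packet, during which the inner TCP keeps hitting its retransmission timeout (RTO) and doubling it. Every one of those inner retransmissions is a spurious copy of data that was never actually lost; it just hasn't been delivered yet.

```c
/* Toy TCP-over-TCP meltdown model: count how many copies of the same
 * segment the inner layer queues while the outer layer is busy
 * recovering a single loss for `outer_recovery_ms` milliseconds. */
static int inner_copies(int inner_rto_ms, int outer_recovery_ms)
{
    int copies = 1;          /* the original transmission */
    int t = inner_rto_ms;    /* time of the first inner timeout */
    while (t < outer_recovery_ms) {
        copies++;            /* spurious retransmission: the data isn't
                                lost, the outer layer just hasn't
                                delivered it yet */
        inner_rto_ms *= 2;   /* exponential backoff */
        t += inner_rto_ms;
    }
    return copies;
}
```

With an inner RTO of 200 ms and an outer recovery taking 1.5 s, the inner layer queues 4 copies of the segment, and every copy adds load to the already-congested outer connection, which is the feedback loop behind the meltdown.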
What's the significance of emphasizing port 80 and 443? You can assign basically any ports to any application. If some firewall blocks all traffic but 443, you can configure the service yourself to listen on 443.
443 is a nice default since it's the most likely port to be both unblocked and left alone by middleboxes. But I agree that it's not exactly a unique feature. "DSVPN - Dead Simple VPN over TCP" would have been a better headline
You, I, and many of the readers here know that. The rest of the world does not. I've been asked before why I had something that needed to be encrypted running over port 80 and thus, unencrypted. It was a bit of a challenge to explain that we actually used SSL on port 80 just fine. (Somewhat similar reasons this VPN uses these ports.)
Port 443 is special because it's dedicated to common encrypted web traffic (HTTPS), so the port is always available and rarely tampered with. If the traffic is inspected, it will resemble HTTPS traffic anyway.
It's not detection-proof, but it avoids a TON of common problems with VPNs.
But does it look like SSL traffic? That's the problem with OpenVPN: it's quite easy to detect. For restrictive environments I much prefer ocserv (which uses the OpenConnect/AnyConnect protocol) or Microsoft's SSTP protocol.
I was back in Dubai recently and sadly WireGuard didn't work, so I had to use OpenConnect, which, while it doesn't have the connectionless-like behaviour of WireGuard, at least worked.
https://en.wikipedia.org/wiki/SoftEther_VPN
> Firewalls performing deep packet inspection are unable to detect SoftEther's VPN transport packets as a VPN tunnel because HTTPS is used to camouflage the connection.
Exactly. SoftEther is also one of the only linux servers I know of that can do Microsoft's SSTP protocol, which is convenient since it's built in to Windows and looks like HTTPS. I'm not sure SoftEther's own SSL protocol is widely used, or it's more used for site-to-site.
A simple noob question: in this context where I want to access a private remote machine, what are the advantages of a VPN (let's say over TCP, I don't know if it matters?) vs. a simple ssh tunnel?
I think that really depends on whether you need all traffic routed through the tunnel/vpn and what else you were trying to accomplish. For just basic system access there isn’t much advantage to vpn, but it would really depend on what you’re trying to accomplish (keep people out? Just secure your connection to another machine? Etc.).
It's not super hard if you don't care about your local stack space. Just over-allocate a bunch of fixed-size arrays on your stack.
Stack variables in C99 (and most C++ compilers supporting C99 features) can even have a dynamic, run-time-determined length. A VLA compiles down to a "sub esp, <size>" instruction, if anyone is curious, so it's super efficient.
I prefer stack-based allocations in my code. I only use heap variables if I'm passing data between threads. If you need to pass data "up" your functions, then it's a parameter (that is passed in on the stack). I would argue that stack-based variables are preferred in C++ due to RAII.
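A minimal VLA example, for the curious. Caveats: VLAs became optional in C11, and since the allocation is just a stack-pointer adjustment there is no overflow check, so sizes must be kept bounded.

```c
#include <stddef.h>

/* C99 variable-length array: the buffer size is chosen at run time,
 * but the storage still lives on the stack (no malloc/free). */
static long sum_squares(size_t n)
{
    long v[n];               /* VLA: storage released when the function returns */
    for (size_t i = 0; i < n; i++)
        v[i] = (long)(i * i);
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += v[i];
    return sum;
}
```

If `n` can be attacker-controlled or large, a heap allocation (or a fixed cap) is the safer choice; an unchecked VLA is effectively an unchecked `alloca`.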
Seems like it's a personal project, and the author is absolutely ruling out supporting anything he doesn't like, including anything peripherally related to systemd, which seems a bit childish to me; but heh, it's not my work.
He's a developer who doesn't feel like doing something. It's childish to expect him to do what YOU think he should be doing, isn't it? Why should he? It's his project - he doesn't want to do it, doesn't have to, doesn't owe anyone anything? I don't get this at all. A shift in society's attitudes, maybe? It's not enough to take someone's work for free; now there's an obligation that he answers to a higher power - anyone with an internet connection?
> He's a developer who doesn't feel like doing something. It's childish to expect him to do what YOU think he should be doing, isn't it?
If we're being asked to care about this developer's project, then I don't think it is unfair at all to criticize. Let me see if I can explain where I'm coming from with a short play representing many, many real dialogs I've had and read in my years of computing:
Evangelist: "hey, you should use this thing I made!"
Me: "No, it doesn't seem to do what I want."
Evangelist: "You don't really need that anyway..."
Me: "No, I really do."
<20 minutes of pointlessness later>
Evangelist: "It's wrong of you to criticize all this hard work I've given you for free!"
Me: "I didn't ask for it!"
So maybe too many years of dealing with open source software evangelists has led me to assume that anyone posting a project like this is doing so in this vein.
It's the particular jab against systemd I'm calling childish. It's no skin off his back to include a community-provided systemd .service file, for example; and he's precluding that entirely, presumably for some petty disdain.
and every time it is posted, the "TCP over TCP is a bad idea" link is brought up (once by myself). Each time, the poster/supporters downplay TCP Meltdown. Solutions for VPN over TCP 80/443 already exist. It makes a great pet project, don't get me wrong, but it's not going to get any interest from industry professionals.
I see this repeated in a lot of places about WireGuard but is there anything wrong with UDPTunnel (http://www.cs.columbia.edu/~lennox/udptunnel/)?
Why would one prefer this instead of WireGuard + UDPTunnel?