I have some code that opens a socket, sends a message, receives a response, and closes the socket. I need to add a timeout that shuts down the socket connection if it takes longer than a specified duration. The code is written in Java, but this is really a design question.

Sockets support a read timeout. To see how it works, read about setSoTimeout() in the Javadoc for java.net.Socket, or try man 2 getsockopt on your local Unix machine; essentially every mainstream socket implementation offers some form of this. The way the read timeout works is that you set how long the socket will block waiting for data before giving up, and the timer effectively restarts every time data arrives. If I open a socket connection with a 5000 millisecond read timeout and the other end of the connection sends me no data for 5 seconds, the blocking read gives up; in Java, this results in a SocketTimeoutException being thrown (the socket itself is still valid, so it's up to my code to close it).

This timeout is somewhat useful, but only in limited circumstances, because it only applies once the connection is established. If a network issue prevents the socket connection from completing but doesn't make the connect attempt fail outright, the read timeout never comes into play, and the connection attempt just hangs indefinitely.

You see this sort of thing happen sometimes when you try to connect to a server via SSH. If there's a network issue, the connection attempt hangs without being dropped. On the other hand, if you do connect but never enter your password, the server drops your connection after a while due to inactivity; that's a read/inactivity timeout on the server's side kicking in.

Does anyone know the best approach for timing out when you attempt to connect and are unable to do so? I haven't found any solutions in my research, although I do confess that I have not looked at any of the Stevens books yet.
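
For reference, my current flow is roughly the following (a simplified sketch; the host, port, and message are placeholders rather than my real values):

    import java.io.*;
    import java.net.Socket;

    public class Client {
        // Placeholder host, port, and message -- the real values differ.
        static String sendAndReceive(String host, int port, String message) throws IOException {
            Socket socket = new Socket(host, port);  // resolves and connects; this is the call that can hang
            try {
                PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));
                out.println(message);              // send the message
                return in.readLine();              // receive the response
            } finally {
                socket.close();                    // always close the socket
            }
        }
    }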
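
And here is the read-timeout behavior I described above, as I understand it (again just a sketch; the 5000 ms value is only an example):

    import java.io.IOException;
    import java.net.Socket;
    import java.net.SocketTimeoutException;
    import java.nio.charset.StandardCharsets;

    public class ReadTimeoutDemo {
        static String readWithTimeout(Socket socket) throws IOException {
            socket.setSoTimeout(5000);  // any blocking read waiting > 5000 ms now fails
            byte[] buf = new byte[1024];
            try {
                int n = socket.getInputStream().read(buf);
                return n < 0 ? null : new String(buf, 0, n, StandardCharsets.US_ASCII);
            } catch (SocketTimeoutException e) {
                // The read gave up, but the socket is still open; closing it is my job.
                // Note this only helps after the connection exists -- it does nothing
                // for a connect attempt that hangs.
                socket.close();
                throw e;
            }
        }
    }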