Subject: Re: TCP send buffer free space-Conclusion
To: None <tech-net@netbsd.org>
From: Dave Gantose <gantose@grc.nasa.gov>
List: tech-net
Date: 07/18/2001 11:36:18
Just wanted to wrap up (for now) this discussion that I originally
started. Thanks to everyone who offered ideas and comments; you have all
been very helpful. Below is a brief summary of the thread.

SUMMARY:
I have two data streams being written to a single socket connection:
"Important" data (from a data generator), and "Fast" data (read from a
file). I wanted a way to prevent Fast data from filling up my TCP Send
Buffer and preventing Important data from getting sent. My thought was to
discern the amount of free space in the TCP Send Buffer before writing
each Fast data record. If there weren't at least X bytes free, I would
delay the write until later. ("Important" data would always get written.)
But I didn't know how to find the TCP Send Buffer free space. [Note, I'm
speaking from an application level here.]
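
For the record, here is roughly the check I had in mind. The total buffer
size is easy to query with getsockopt(SO_SNDBUF); the part I could never
find was the number of bytes already queued, so the helper below is purely
hypothetical -- it is the call I was looking for, not something that exists:

    #include <sys/types.h>
    #include <sys/socket.h>

    extern size_t get_sndbuf_queued(int);   /* hypothetical -- does not exist */

    /*
     * Sketch of the check I wanted to make before writing each Fast
     * record: is there at least reclen bytes of free space left?
     */
    int
    fast_record_fits(int sock, size_t reclen)
    {
        int sndbuf;
        size_t queued;
        socklen_t len = sizeof(sndbuf);

        /* Total send buffer size -- this part is standard. */
        if (getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) == -1)
            return -1;

        /* Bytes already sitting in the send buffer -- this is the
         * information I could not get at from an application. */
        queued = get_sndbuf_queued(sock);

        return ((size_t)sndbuf - queued >= reclen);
    }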

PROPOSED SOLUTIONS:
- Use two different sockets (or at least two connections), one for each
type of data. This was the favorite, but the single socket connection has
been imposed on me, so this is not an option.

- Have the client send (application-level) acknowledgements to help me
regulate the flow of playback data. Unfortunately, I have no control over
the client (neither physical nor advisory). They do what they do, and I am
at their mercy (and they don't do acknowledgements).

- Use the type-of-service field in the IP header to discriminate between
the types of data (there's a sketch of what that would look like after
this list). But I don't think the system is set up for this. (That is, I
doubt that anyone down the line--my client, for example--is paying
attention to this field.)

- Do some "kernel grovelling" to find out the Send Buffer free space. I
don't want to go this deep to get the information, because I'm pretty sure
I'd somehow screw things up and cause myself myriad other problems.

- Write a new socket option that would allow the kernel to return the free
space value to an application. This is interesting, but I don't think a
new kernel is an option at this time. I will keep it in mind though, at
least as an exercise toward improving my understanding and skills (when
there is time).
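
Here's the sketch promised above for the type-of-service idea, just to
show how little is involved at the application level (set_lowdelay() is my
own placeholder name):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>

    /*
     * Mark a socket low-delay via the IP type-of-service field.
     * Only matters if the routers and the peer actually honor it.
     */
    static int
    set_lowdelay(int sock)
    {
        int tos = IPTOS_LOWDELAY;   /* IPTOS_THROUGHPUT would suit bulk data */

        return setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
    }

Even then, with everything on one connection I'd have to flip the field
before each write, and TCP is free to coalesce records into segments, so I
doubt it buys me much.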

CURRENT ACTION:
Here's what I am going to try for now: First, I have changed my
application so that, rather than being read directly out of a file and
dumped into the TCP Send Buffer, "Fast" data comes *into* the app through
its own socket, which is blocking. (The "Important" data already comes in
this way, through a different socket which is not blocking.) I will
monitor all my incoming socket connections (there are others) and service
them in priority order--if an Important connection has data, it gets
serviced, then all the Important connections are checked again. Only when
no "Important" messages are available will a record from the "Fast"
connection be serviced--then the Important connections will be checked yet
again.
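
In rough C, the service loop looks something like this (a sketch only:
MAXCONN, service_important() and service_fast() are placeholders for my
real descriptors and record handling):

    #include <poll.h>

    #define MAXCONN 16          /* placeholder for however many sockets I watch */

    /* Placeholders for my real record handling. */
    extern void service_important(int fd);
    extern void service_fast(int fd);

    /*
     * Priority loop: every ready "Important" connection is serviced
     * first; the "Fast" connection is touched only when none of the
     * Important ones had data waiting.
     */
    void
    service_loop(int imp[], int nimp, int fastfd)
    {
        struct pollfd pfd[MAXCONN];
        int i, fast_idx, got_important;

        for (;;) {
            for (i = 0; i < nimp; i++) {
                pfd[i].fd = imp[i];
                pfd[i].events = POLLIN;
            }
            fast_idx = nimp;
            pfd[fast_idx].fd = fastfd;
            pfd[fast_idx].events = POLLIN;

            if (poll(pfd, nimp + 1, INFTIM) == -1)
                continue;       /* real code would look at errno */

            /* Important data always wins; after servicing it we go
             * straight back to poll() and check again. */
            got_important = 0;
            for (i = 0; i < nimp; i++) {
                if (pfd[i].revents & POLLIN) {
                    service_important(imp[i]);
                    got_important = 1;
                }
            }
            if (got_important)
                continue;

            /* Nothing Important pending: move one Fast record along. */
            if (pfd[fast_idx].revents & POLLIN)
                service_fast(fastfd);
        }
    }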

A further refinement I may try: if a write of a record to the outgoing TCP
Send Buffer fails with EWOULDBLOCK, I will suspend servicing of the Fast
data altogether for some period, so that the client can catch up and
relieve the pressure.
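
Sketched out, that refinement would look something like this (again, just
placeholders; the one-second hold-off is a number I picked for
illustration, and real code would also have to cope with partial writes):

    #include <sys/types.h>
    #include <errno.h>
    #include <time.h>
    #include <unistd.h>

    static time_t fast_hold_until;      /* 0 == Fast data not suspended */

    /*
     * If a Fast record can't be written because the (non-blocking)
     * outgoing socket's send buffer is full, back off for a while so
     * the client can catch up.
     */
    int
    send_fast_record(int outsock, const void *rec, size_t len)
    {
        ssize_t n;

        if (fast_hold_until != 0 && time(NULL) < fast_hold_until)
            return 0;           /* still backing off; caller retries later */
        fast_hold_until = 0;

        n = write(outsock, rec, len);
        if (n == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
            fast_hold_until = time(NULL) + 1;   /* placeholder hold-off */
            return 0;
        }
        return (n == (ssize_t)len); /* 1 if the whole record went out */
    }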

I know this solution is not optimal (and I'm not even sure it *is* a
solution until I can devise a realistic test) but I am hopeful. And I am
trying to extract, from someone, statistics on the network and the
client's processing characteristics. Maybe that will lead to some new
ideas.

That's all for now. The saga continues. Thanks again.

-- 
Dave Gantose 
Zin Technologies, Inc.
NASA John Glenn Research Center     phone: (216)977-0392 
3000 Aerospace Pkwy.            work stuff: gantose-work@bigfoot.com 
Brook Park, OH  44142          other stuff: gantose@bigfoot.com 
=====>> PGP Public Key available <<=====