Capturing the Stanford Colloquium Material

I suspect that some or most of the following is based on false assumptions. I would be glad to rework it in light of new information.

I suspect that we must share cameras and camera operators at the colloquium. This means that we must capture our stream as it passes some point in their system.

Starting at the end of the Stanford stream, we could presumably act as a viewer once the material becomes available from the Stanford system. This would limit us to 100 Kb/s quality, which we may regret sometime in the future. It also requires decoding the MS-format stream, which may not be feasible. At best it probably involves some sort of transcoding, which may degrade the material beyond the loss already due to MS lossy compression.

I presume that there is some sort of post-lecture processing where video and sound compression occur. Before this point the uncompressed NTSC(?) signal is available from tape. I don’t know in what form the slide material exists at this point. I have attended many Stanford seminars that were captured by the same system, I suppose. The main big screen behind the speaker would typically hold the image from the speaker’s computer. The monitors, scattered about the room, would sometimes duplicate the main screen but other times show the speaker himself. At such times I don’t know whether the image of the speaker was being captured on tape. I suspect that the real-time remote viewers saw what was on the local room monitors.

I think that some of the lectures are narrowcast to several remote sites in real time and in analog format, probably NTSC. We could perhaps solve some problems by capturing our material at such a site, possibly one set up for us. That would introduce transmission noise but eliminate recording noise. I don’t know whether this is a good tradeoff.

Assuming, for the moment, that the tape captures what the monitors show, then that tape, or narrowcast, would serve our purposes about as well as possible given the plan of sharing the camera operator.


If you could merely get the slide-player program on the speaker’s PC (Powerpoint?) to record, by its own clock, the times at which the slides were presented, several problems would be solved and the file of slides would be much more nearly useful.
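The timestamp log could be quite simple. As a sketch only, assuming some hypothetical hook in the slide-player that fires on each slide change (the function names and the fake clock below are illustrative, not any real Powerpoint API):

```python
import time

def log_slide_change(log, slide_number, clock=time.time):
    """Append (slide number, timestamp) to the log.

    `clock` is the presenting machine's own clock, as the memo
    suggests; the hook that would call this on each slide change
    is hypothetical.
    """
    log.append((slide_number, clock()))

def as_offsets(log):
    """Convert absolute times to offsets from the first slide.

    A later step could use these offsets to synchronize the slide
    file against the video recording.
    """
    if not log:
        return []
    t0 = log[0][1]
    return [(n, t - t0) for n, t in log]

# Simulated presentation: three slide changes against a fake clock.
fake_times = iter([100.0, 130.5, 200.0])
log = []
for slide in (1, 2, 3):
    log_slide_change(log, slide, clock=lambda: next(fake_times))
print(as_offsets(log))  # [(1, 0.0), (2, 30.5), (3, 100.0)]
```

Such a log, kept alongside the slide file, would let the slides be re-rendered at full resolution in step with the video, without ever passing them through NTSC.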

This addresses the problem that the image from the speaker’s computer and the client’s screen are probably both capable of more than the nominal NTSC resolution of 525 lines. It would be well to capture the pixels from the computer before they are scan-converted to NTSC. Even better would be to capture the computer abstractions that produced the image. On the other hand, the NTSC format is 55 years old and the program Powerpoint is 5 years old. The time will soon come when NTSC is still readable but no one can boot a system capable of running Powerpoint. The quick solution to this is the new digital video standard, which supports whatever resolution you need. I have hope that the technology to decode that will be around longer than I expect Powerpoint to be.