In data broadcast, it is desirable that, when data are updated, the server output the updated data as soon as possible in order to deliver the latest data to clients. When two or more data updates take place at the same time, the server has to choose which data to output first. We first define the latency time, which is the elapsed time from the time data are updated to the time they are broadcast. Then, we propose methods to reduce the mean latency time under the circumstance that each datum has a different user request ratio and update probability. Furthermore, we discuss output data selection algorithms that can decrease the computational load of the server. © 2001 Scripta Technica
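
The abstract does not state the selection rule itself; as a rough illustration only, the following Python sketch assumes a simple greedy heuristic in which the server, among the data items whose updates are waiting to be broadcast, picks the one with the largest request-ratio-weighted waiting time. The names PendingUpdate, select_next, request_ratio, and updated_at are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class PendingUpdate:
    item_id: str
    request_ratio: float   # fraction of client requests for this item (hypothetical parameter)
    updated_at: float      # time at which the item was last updated

def select_next(pending: list[PendingUpdate], now: float) -> PendingUpdate:
    """Pick the pending updated item to broadcast next.

    Heuristic (an assumption, not the paper's method): choose the item whose
    accumulated weighted latency, request_ratio * (now - updated_at), is
    largest, so that frequently requested items do not wait long.
    """
    return max(pending, key=lambda p: p.request_ratio * (now - p.updated_at))

# Example: two items updated at the same time; the more frequently
# requested one is broadcast first.
pending = [
    PendingUpdate("a", request_ratio=0.7, updated_at=10.0),
    PendingUpdate("b", request_ratio=0.3, updated_at=10.0),
]
chosen = select_next(pending, now=12.0)
print(chosen.item_id)  # -> "a"

# Latency time of the chosen item, as defined in the abstract:
# the elapsed time from update to broadcast.
latency = 12.0 - chosen.updated_at
```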