
Change how screen-writing events are 'priced'?

Recommended Posts

As I understand it, the reason there's a limit on the number of times you can call certain GPU functions per tick is bandwidth, because everyone in the vicinity of the screen has to get an update on what it's now displaying. (The reason different GPUs have different limits is to encourage upgrading, I further presume.) However, the limits seem a bit nonsensical. It costs exactly as much time to use set() to print 50 lines of 160 characters as it does to use set() 50 times to print 50 characters. The setBackground() and setForeground() functions cost time even though they don't actually change the screen at all, and therefore don't use bandwidth. This means that printing a line of characters in rainbow colors costs much more time than printing a black and white line and makes color graphics in general slow and unwieldy.


So I'd like to request a change. Have the gpu.set() function route through some sort of base 'put a character with a specific foreground and background color on the screen' function. Have THAT limit calls per tick. (Since the gpu.set() tiers are currently 4/8/16 and the maximum number of characters a single call can set are 50/80/160, make the limits 200/640/2560?) Make setBackground() and setForeground() free. copy() and fill() can stay as they are, since I presume you have the client do the work on those to save bandwidth. If you're concerned about overhead per call, you could have screen updates only get sent every tick. (You may already do this. I dunno.)
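To make the proposal concrete, here's a minimal sketch of what that unified per-character budget could look like. This is hypothetical pseudocode in Python, not actual OpenComputers internals; the budget numbers come from multiplying the current set() call tiers (4/8/16 per tick) by the maximum string lengths (50/80/160 characters):

```python
# Hypothetical unified budget: every drawing call routes through one
# counter of characters written per tick; setForeground/setBackground
# would be free, since they only change "brush" state.
CHAR_BUDGET = {1: 4 * 50, 2: 8 * 80, 3: 16 * 160}  # 200 / 640 / 2560

class GpuBudget:
    def __init__(self, tier):
        self.budget = CHAR_BUDGET[tier]
        self.used = 0

    def try_put(self, n_chars):
        """Charge n_chars against this tick's budget; False means the
        calling machine would have to yield until the next tick."""
        if self.used + n_chars > self.budget:
            return False
        self.used += n_chars
        return True

    def next_tick(self):
        self.used = 0

gpu = GpuBudget(3)
# 16 full 160-character rows fit in one tick under the 2560-char budget:
assert all(gpu.try_put(160) for _ in range(16))
assert not gpu.try_put(1)   # budget exhausted until next tick
```

Under this scheme, fifty set() calls of one character each would cost the same 50 characters as a single 50-character set(), which is the whole point.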


I'd love to write a high-level 'pixel graphics' library, making it easier for folks to do things like draw lines on the screen and move 'sprites' around, but currently it'd only be even slightly feasible in black and white. (The double-buffering suggestion previously made would also help.) I hope my suggestion has merit.


Your basic assumption that setting colors does not generate network traffic is wrong, I'm afraid. It does. Think of it like the foreground and background color of the "brush" in a common paint program: that state gets synchronized to the client when it changes, precisely to avoid having to send the colors along with each set and fill command, which in the common use case saves a lot of bandwidth.


Yeah, I admit I don't really understand how the code works here. I was all prepared to make a big argument about how, worst-case scenario, a Tier 3 screen/GPU combo doing a full-color display could take 125 seconds to update the screen, assuming that it would take (call setBackground = 1/8 of a tick) + (call setForeground = 1/8 of a tick) + (call set = 1/16 of a tick) = 5/16 ticks to update each of the 8,000 characters. However, actually trying it gave me a figure around 50 seconds, which suggests I get eight setBackground calls, eight setForeground calls, and sixteen set calls per tick (instead of, as I expected, each advancing some common fractional tick meter).
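For anyone checking the arithmetic, here is the math behind both figures, assuming 20 game ticks per second and a full 160x50 Tier 3 screen:

```python
# Verifying the two timing figures from the post above.
CHARS = 160 * 50            # full Tier 3 screen: 8,000 characters
TICKS_PER_SEC = 20

# Expected model: every call drains one shared fractional-tick meter,
# so a fully colored character costs 1/8 + 1/8 + 1/16 = 5/16 of a tick.
shared_ticks = CHARS * (1/8 + 1/8 + 1/16)
print(shared_ticks / TICKS_PER_SEC)            # 125.0 seconds

# Observed model: separate per-method counters (8/8/16 calls per tick).
# Each colored character needs one call of each method, so the
# 8-per-tick color setters are the bottleneck.
chars_per_tick = min(8, 8, 16)
print(CHARS / chars_per_tick / TICKS_PER_SEC)  # 50.0 seconds
```

The measured ~50 seconds matching the second model is what suggests the limits are independent counters rather than a shared meter.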


Still, 50 seconds is pretty darn high, especially when I consider that, without color, I can update a screen of the same size in black and white 6.8 times per second! (I just use a single set command for each line.) It really makes me not want to ever use color at all. Is there something you can do to improve the situation?


I have two possible suggestions. One is to have calls that change the screen display update a delta frame instead. You'd have a per-tier limit on the size of that delta frame, and when a call exceeds it, the call becomes indirect. The delta frame then gets pushed out each tick. This should make lots of short set() outputs cost, time-wise, about as much as a few long ones. The other is to provide some way to 'pre-cook' a full-color line, the way you can currently prepare a string for a full row's worth of output. Even if you're sending three times the data (a byte for foreground color, a byte for background color, a byte for the Code Page 437 entry), you could still redraw the screen about 2.3 times a second that way.
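A rough sketch of the delta-frame idea, with everything here being hypothetical naming for illustration rather than mod code:

```python
# Hypothetical delta-frame accumulator: draw calls write into a
# pending-change map instead of being sent immediately; once per tick
# the accumulated delta is flushed in a single packet, so many small
# set() calls cost about the same as a few large ones.

class DeltaFrame:
    def __init__(self, max_cells_per_tick):
        self.max_cells = max_cells_per_tick
        self.pending = {}  # (x, y) -> (char, fg, bg)

    def put(self, x, y, char, fg, bg):
        """Record one cell change. Returns False when this tick's delta
        budget is exceeded, signalling an 'indirect' (yielding) call."""
        if (x, y) not in self.pending and len(self.pending) >= self.max_cells:
            return False
        self.pending[(x, y)] = (char, fg, bg)
        return True

    def flush(self):
        """Called once per tick: hand off and clear the delta."""
        packet, self.pending = self.pending, {}
        return packet

frame = DeltaFrame(max_cells_per_tick=2560)
for i, ch in enumerate("Hello"):
    frame.put(i, 0, ch, 0xFFFFFF, 0x000000)
# Overwriting an already-dirty cell costs nothing extra:
frame.put(0, 0, "h", 0xFF0000, 0x000000)
assert len(frame.flush()) == 5
```

A nice side effect of keying the delta by cell is that redrawing the same region repeatedly within one tick only ever pays for it once.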


What are your thoughts?


Yeah, the limits are separate per method.

While making the limit depend on the actual raw delta generated would indeed be ideal, it's not possible to seamlessly integrate this with how call limits are currently handled in OC (they're just simple counters). I have some ideas on making the limits dynamic in the future (either via reflected callbacks defined in the callback annotation, or via an interface that can be added to components), but those will need careful evaluation to make sure they don't butcher performance. Until then, I'm afraid there's no sane way to implement this.


I wonder why screen updates take so much bandwidth? After all, all you need to send is the screen's content (the easy way) or the delta (more difficult, but even less data). Even sending the complete content is a minor amount of data that can easily be compressed down to what fits in 1-2 TCP/IP packets. And if you limit those updates to 2 or 4 times a second, the bandwidth would be almost non-existent.
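For what it's worth, here is a quick Python sanity check of that compression claim, using zlib on a made-up 160x50 screen buffer with 3 bytes per cell (character, foreground index, background index); the layout is just an assumption for illustration:

```python
# Deflating a plausible full-screen buffer: mostly spaces with some
# text, uniform colors, as typical terminal output tends to be.
import zlib

cols, rows = 160, 50
chars = (b"Hello, OpenComputers! " + b" " * 138) * rows  # 160 bytes/row
fg = bytes([7]) * (cols * rows)   # one palette index everywhere
bg = bytes([0]) * (cols * rows)
raw = chars + fg + bg             # 24,000 bytes uncompressed

packed = zlib.compress(raw, 9)
print(len(raw), len(packed))
assert len(packed) < len(raw) // 10   # >10x smaller for this buffer
```

Of course, a screen full of incompressible noise would fare much worse; the 1-2 packet figure only holds for typical text-heavy content.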

So much for my imagination; what are the real reasons those had to be limited? ;)

