Everything posted by evg-zhabotinsky

  1. Finally returned to the project. It sort of works in-game already. Currently it often breaks persistence when the last yield was forced close to a Java function call: JNLua internal functions somehow end up in the persisted state and prevent persisting. Otherwise no problems noticed, save for a single very weird segfault on world save that I could not reproduce. The issue was created here: https://github.com/MightyPirates/OpenComputers/issues/3036
  2. Actually, I think a PR is the proper way to manage such subprojects. It's not just a transaction from one branch to another; it's more like a dedicated single-issue, single-branch repository, linking code changes and the related discussion together. Did you know that you can even create pull requests to other pull requests? And when all is done, the PR branch can be squashed and rebased onto the target, and the original commit history should still remain attached to the discussion. (At least that's how it worked in GitLab when I last checked.) Buuut the repo isn't mine, so I won't insist on
  3. How about a way to execute GC hook code after GC is done? That is, no user code at all runs during GC. Only a grand total of 9 short lines of machine.lua code per hooked object are run. The hooked objects can't be collected until the next GC pass anyway, so we might as well quickly dump them into a list and process their hooks immediately after GC finishes (a rough sketch of the idea follows the post list below). (Or at least before any other user code runs; yielding hooks allow that.) Anyway, I need to get __gc hooks out of the way to continue working on the machine.lua upgrade, so I guess I'm leaving only the setmetatable() stub for now. The
  4. I've rebuilt the mod with a freshly built native Lua library, and my patch seems to work in-game. I'm currently updating machine.lua to use the new functionality, and it looks like it's going to be a massive rework. For example, user-provided GC hooks would be hard to update properly (and the current implementation is bugged), but I think I came up with a brand new implementation that can even be enabled by default! See this gist for the code. What I wanted to ask right now is: how hard would it be to push an almost complete rework of machine.lua upstream? P.S. Is "Requests"
  5. Yes, looks like it, now that I've taken a closer look at Eris. It even has a special error message for hook-yielded coroutines. However, hook-yielded coroutines have an almost normal Lua frame at the top of the stack, and with slight modifications I seem to have managed to persist them. See https://github.com/fnuecke/eris/pull/30
  6. I'd like to clarify that bit. Do you mean just stopping a machine while it's running? Yielding from a Lua debug hook? Neither of the two is what I suggested. Moreover, the latter is mostly impossible: the Lua hook function cannot yield across the C boundary of the underlying C hook, and even the C hook cannot yield arbitrarily (a small illustration follows the post list below). I'm pretty sure I've got this right by now. Line and count hooks are executed in the middle of instruction fetch, and if the hook yields at the end, the already-completed part of the fetch is undone. (See vmfetch in lvm.c and luaG_traceexec in ldebug.c.) Also, looki
  7. This is a suggestion on how and why to rework OC's sysyield timeouts. The required fixes basically give the ability to implement preemptive multitasking for free. 1. The current implementation of sysyield timeouts is insufficient Currently, it is not real protection against executor thread hogging. It is only a safeguard, though a very efficient one. Consider the following code (laid out more readably after this list): local function dos() while true do end end while true do pcall(coroutine.resume, coroutine.create(dos)) computer.pullSignal(0) end Flash it onto an EEPROM, and throw that into a bare T1 case with j
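
A rough sketch of the deferred GC-hook idea from post 3, assuming plain Lua 5.3. This is illustrative only, not the actual machine.lua code; the names (sethook, runhooks, pending) are made up for the example:

    local pending = {}  -- hooks queued by finalizers during a GC pass

    local function sethook(obj, hook)
      -- The finalizer only queues the dying object and its hook;
      -- no user code runs during GC itself.
      setmetatable(obj, {__gc = function(o)
        pending[#pending + 1] = {o, hook}
      end})
      return obj
    end

    local function runhooks()
      -- Called right after a GC pass, before any other user code runs.
      local queued = pending
      pending = {}
      for i = 1, #queued do
        local o, hook = queued[i][1], queued[i][2]
        pcall(hook, o)  -- isolate errors so one hook cannot break the rest
      end
    end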
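
A small illustration of the C-boundary limitation mentioned in post 6, assuming stock Lua 5.3: a Lua-level debug hook runs inside the C hook dispatcher, so yielding from it fails.

    local co = coroutine.create(function()
      debug.sethook(function()
        coroutine.yield()       -- try to yield from inside the hook
      end, "", 100)             -- count hook, fires every 100 instructions
      for i = 1, 1000 do end    -- busy work to trigger the hook
    end)
    print(coroutine.resume(co))
    -- typically prints: false ... attempt to yield across a C-call boundary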
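
The hogging snippet from post 7, laid out on separate lines with comments; it assumes the usual OpenComputers EEPROM environment where computer.pullSignal is available:

    local function dos()
      while true do end  -- busy loop that never yields
    end

    while true do
      -- Spawn and resume the busy coroutine over and over; the pcall matches
      -- the original snippet, although coroutine.resume already returns errors
      -- rather than raising them.
      pcall(coroutine.resume, coroutine.create(dos))
      computer.pullSignal(0)  -- yield from the outer loop itself
    end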