/*
 * Copyright 2018 Google, Inc.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met: redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer;
 * redistributions in binary form must reproduce the above copyright
 * notice, this list of conditions and the following disclaimer in the
 * documentation and/or other materials provided with the distribution;
 * neither the name of the copyright holders nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * Authors: Gabe Black
 */

#ifndef __SYSTEMC_CORE_SCHEDULER_HH__
#define __SYSTEMC_CORE_SCHEDULER_HH__

#include <functional>
#include <list>
#include <map>
#include <set>
#include <vector>

#include "base/logging.hh"
#include "sim/core.hh"
#include "sim/eventq.hh"
#include "systemc/core/channel.hh"
#include "systemc/core/list.hh"
#include "systemc/core/process.hh"
#include "systemc/core/sched_event.hh"

class Fiber;

namespace sc_gem5
{

class TraceFile;

typedef NodeList<Process> ProcessList;
typedef NodeList<Channel> ChannelList;

/*
 * The scheduler supports three different mechanisms, the initialization phase,
 * delta cycles, and timed notifications.
 *
 * INITIALIZATION PHASE
 *
 * The initialization phase has three parts:
 * 1. Run requested channel updates.
 * 2. Make processes which need to initialize runnable (methods and threads
 *    which didn't have dont_initialize called on them).
 * 3. Process delta notifications.
 *
 * First, the Kernel SimObject calls the update() method during its startup()
 * callback which handles the requested channel updates. The Kernel also
 * schedules an event to be run at time 0 with a slightly elevated priority
 * so that it happens before any "normal" event.
 *
 * When that t0 event happens, it calls the scheduler's prepareForInit method
 * which performs step 2 above. That indirectly causes the scheduler's
 * readyEvent to be scheduled with slightly lowered priority, ensuring it
 * happens after any "normal" event.
 *
 * Because delta notifications are scheduled at the standard priority, all
 * of those events will happen next, performing step 3 above. Once they finish,
 * if the readyEvent was scheduled above, there shouldn't be any higher
 * priority events in front of it. When it runs, it will start the first
 * evaluate phase of the first delta cycle.
 *
 * DELTA CYCLE
 *
 * A delta cycle has three phases within it.
 * 1. The evaluate phase where runnable processes are allowed to run.
 * 2. The update phase where requested channel updates happen.
 * 3. The delta notification phase where delta notifications happen.
 *
 * The readyEvent runs all three steps of the delta cycle. It first goes
 * through the list of runnable processes and executes them until the set is
 * empty, and then immediately runs the update phase. Since these are all part
 * of the same event, there's no chance for other events to intervene and
 * break the required order above.
 *
 * During the update phase above, the spec forbids any action which would make
 * a process runnable. That means that once the update phase finishes, the set
 * of runnable processes will be empty. There may, however, have been some
 * delta notifications/timeouts which will have been scheduled during either
 * the evaluate or update phase above. Those will have been accumulated in the
 * scheduler, and are now all executed.
 *
 * If any processes became runnable during the delta notification phase, the
 * readyEvent will have been scheduled and will be waiting and ready to run
 * again, effectively starting the next delta cycle.
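 *
 * As an illustration (a sketch in terms of the standard sc_signal channel,
 * not anything specific to this scheduler):
 *
 *     sig.write(1);                     // Evaluate phase: requests an update.
 *     wait(sig.value_changed_event());  // In some other process.
 *
 * The new value is committed during the update phase, and the waiting process
 * is made runnable by the delta notification phase, so it resumes one delta
 * cycle after the write.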
 *
 * TIMED NOTIFICATION PHASE
 *
 * If no processes became runnable, the event queue will continue to process
 * events until it comes across an event which represents all the timed
 * notifications which are supposed to happen at a particular time. The object
 * which tracks them will execute all those notifications, and then destroy
 * itself. If the readyEvent is now ready to run, the next delta cycle will
 * start.
 *
 * PAUSE/STOP
 *
 * To inject a pause from sc_pause which should happen after the current delta
 * cycle's delta notification phase, an event is scheduled with a lower than
 * normal priority, but higher than the readyEvent. That ensures that any
 * delta notifications which are scheduled with normal priority will happen
 * first, since those are part of the current delta cycle. Then the pause
 * event will happen before the next readyEvent which would start the next
 * delta cycle. All of these events are scheduled for the current time, and so
 * would happen before any timed notifications went off.
 *
 * To inject a stop from sc_stop, the delta cycles should stop before even the
 * delta notifications have happened, but after the evaluate and update phases.
 * For that, a stop event with slightly higher than normal priority will be
 * scheduled so that it happens before any of the delta notification events
 * which are at normal priority.
 *
 * MAX RUN TIME
 *
 * When sc_start is called, it's possible to pass in a maximum time the
 * simulation should run to, at which point sc_pause is implicitly called. The
 * simulation is supposed to run up to the latest timed notification phase
 * which is less than or equal to the maximum time. In other words it should
 * run timed notifications at the maximum time, but not the subsequent evaluate
 * phase. That's implemented by scheduling an event at the max time with a
 * priority which is lower than all the others except the ready event. Timed
 * notifications will happen before it fires, but it will override any ready
 * event and prevent the evaluate phase from starting.
 */
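
/*
 * For illustration only, a minimal sc_main might drive these phases roughly
 * as sketched below ("Top" is a hypothetical user module, not part of gem5):
 *
 *     int sc_main(int argc, char *argv[])
 *     {
 *         Top top("top");                          // Elaboration.
 *         sc_core::sc_start(sc_core::sc_time(10, sc_core::SC_US));
 *         // Timed notifications at 10us have run, but not the evaluate
 *         // phase which would follow them; the simulation is now paused.
 *         sc_core::sc_start();    // Run until starvation or sc_stop().
 *         return 0;
 *     }
 */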

class Scheduler
{
  public:
    typedef std::list<ScEvent *> ScEvents;

    class TimeSlot : public ::Event
    {
      public:
        TimeSlot() : ::Event(Default_Pri, AutoDelete) {}

        ScEvents events;
        void process();
    };

    typedef std::map<Tick, TimeSlot *> TimeSlots;

    Scheduler();
    ~Scheduler();

    void clear();

    const std::string name() const { return "systemc_scheduler"; }

    uint64_t numCycles() { return _numCycles; }
    Process *current() { return _current; }

    void initPhase();

    // Register a process with the scheduler.
    void reg(Process *p);

    // Run the next process, if there is one.
    void yield();

    // Put a process on the ready list.
    void ready(Process *p);

    // Mark a process as ready if init is finished, or put it on the list of
    // processes to be initialized.
    void resume(Process *p);

    // Remove a process from the ready/init list if it was on one of them, and
    // return whether it was.
    bool suspend(Process *p);

    // Schedule an update for a given channel.
    void requestUpdate(Channel *c);

    // Run the given process immediately, preempting whatever may be running.
    void
    runNow(Process *p)
    {
        // This function may put a process on the wrong list, i.e. a thread
        // on the method list. That's fine since that's just a performance
        // optimization, and the important thing here is how the processes are
        // ordered.

        // If a process is running, schedule it/us to run again.
        if (_current)
            readyListMethods.pushFirst(_current);
        // Schedule p to run first.
        readyListMethods.pushFirst(p);
        yield();
    }

    // Run this process at the next opportunity.
    void
    runNext(Process *p)
    {
        // Like above, it's ok if this isn't a method. Putting it on this list
        // just gives it priority.
        readyListMethods.pushFirst(p);
        if (!inEvaluate())
            scheduleReadyEvent();
    }

    // Set an event queue for scheduling events.
    void setEventQueue(EventQueue *_eq) { eq = _eq; }

    // Get the current time according to gem5.
    Tick getCurTick() { return eq ? eq->getCurTick() : 0; }

    Tick
    delayed(const ::sc_core::sc_time &delay)
    {
        return getCurTick() + delay.value();
    }

    // For scheduling delayed/timed notifications/timeouts.
    void
    schedule(ScEvent *event, const ::sc_core::sc_time &delay)
    {
        Tick tick = delayed(delay);
        if (tick < getCurTick())
            tick = getCurTick();

        // Delta notification/timeout.
        if (delay.value() == 0) {
            event->schedule(deltas, tick);
            if (!inEvaluate() && !inUpdate())
                scheduleReadyEvent();
            return;
        }

        // Timed notification/timeout.
        TimeSlot *&ts = timeSlots[tick];
        if (!ts) {
            ts = new TimeSlot;
            schedule(ts, tick);
        }
        event->schedule(ts->events, tick);
    }
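
    // Roughly speaking (a sketch in terms of the standard API, not something
    // spelled out here): ev.notify(SC_ZERO_TIME) ends up on the deltas list
    // above and is drained in the next delta notification phase, while
    // ev.notify(10, SC_NS) lands in the TimeSlot keyed by getCurTick() plus
    // that delay.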

    // For descheduling delayed/timed notifications/timeouts.
    void
    deschedule(ScEvent *event)
    {
        ScEvents *on = event->scheduledOn();

        if (on == &deltas) {
            event->deschedule();
            return;
        }

        // Timed notification/timeout.
        auto tsit = timeSlots.find(event->when());
        panic_if(tsit == timeSlots.end(),
                "Descheduling event at time with no events.");
        TimeSlot *ts = tsit->second;
        ScEvents &events = ts->events;
        assert(on == &events);
        event->deschedule();

        // If no more events are happening at this time slot, get rid of it.
        if (events.empty()) {
            deschedule(ts);
            timeSlots.erase(tsit);
        }
    }

    void
    completeTimeSlot(TimeSlot *ts)
    {
        assert(ts == timeSlots.begin()->second);
        timeSlots.erase(timeSlots.begin());
        if (!runToTime && starved())
            scheduleStarvationEvent();
        scheduleTimeAdvancesEvent();
    }

    // Pending activity ignores gem5 activity, much like how a systemc
    // simulation wouldn't know about asynchronous external events (socket IO
    // for instance) that might happen before time advances in a pure
    // systemc simulation. Also the spec lists what specific types of pending
    // activity need to be counted, which obviously doesn't include gem5
    // events.

    // Return whether there's pending systemc activity at this time.
    bool
    pendingCurr()
    {
        return !readyListMethods.empty() || !readyListThreads.empty() ||
            !updateList.empty() || !deltas.empty();
    }

    // Return whether there are pending timed notifications or timeouts.
    bool
    pendingFuture()
    {
        return !timeSlots.empty();
    }

    // Return how many ticks there are until the first pending event, if any.
    Tick
    timeToPending()
    {
        if (pendingCurr())
            return 0;
        if (pendingFuture())
            return timeSlots.begin()->first - getCurTick();
        return MaxTick - getCurTick();
    }
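
    // These presumably back the standard sc_pending_activity_at_current_time(),
    // sc_pending_activity_at_future_time() and sc_time_to_pending_activity()
    // queries, though that wiring lives outside this header.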

    // Run scheduled channel updates.
    void runUpdate();

    // Run delta events.
    void runDelta();

    void setScMainFiber(Fiber *sc_main) { scMain = sc_main; }

    void start(Tick max_tick, bool run_to_time);
    void oneCycle();

    void schedulePause();
    void scheduleStop(bool finish_delta);

    enum Status
    {
        StatusOther = 0,
        StatusEvaluate,
        StatusUpdate,
        StatusDelta,
        StatusTiming,
        StatusPaused,
        StatusStopped
    };

    bool elaborationDone() { return _elaborationDone; }
    void elaborationDone(bool b) { _elaborationDone = b; }

    bool paused() { return status() == StatusPaused; }
    bool stopped() { return status() == StatusStopped; }
    bool inEvaluate() { return status() == StatusEvaluate; }
    bool inUpdate() { return status() == StatusUpdate; }
    bool inDelta() { return status() == StatusDelta; }
    bool inTiming() { return status() == StatusTiming; }

    uint64_t changeStamp() { return _changeStamp; }
    void stepChangeStamp() { _changeStamp++; }

    void throwToScMain();

    Status status() { return _status; }
    void status(Status s) { _status = s; }

    void registerTraceFile(TraceFile *tf) { traceFiles.insert(tf); }
    void unregisterTraceFile(TraceFile *tf) { traceFiles.erase(tf); }

  private:
    typedef const EventBase::Priority Priority;
    static Priority DefaultPriority = EventBase::Default_Pri;

    static Priority StopPriority = DefaultPriority - 1;
    static Priority PausePriority = DefaultPriority + 1;
    static Priority MaxTickPriority = DefaultPriority + 2;
    static Priority ReadyPriority = DefaultPriority + 3;
    static Priority StarvationPriority = ReadyPriority;
    static Priority TimeAdvancesPriority = EventBase::Maximum_Pri;
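
    // At the same tick, gem5 runs events with numerically lower priority
    // first, so the ordering here is: stop, then normal priority events
    // (delta and timed notifications), then pause, then the max-tick event,
    // then the ready/starvation events, with the time-advances trace event
    // running after everything else.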

    EventQueue *eq;

    // For gem5 style events.
    void
    schedule(::Event *event, Tick tick)
    {
        if (initDone)
            eq->schedule(event, tick);
        else
            eventsToSchedule[event] = tick;
    }

    void schedule(::Event *event) { schedule(event, getCurTick()); }

    void
    deschedule(::Event *event)
    {
        if (initDone)
            eq->deschedule(event);
        else
            eventsToSchedule.erase(event);
    }

    ScEvents deltas;
    TimeSlots timeSlots;

    Process *
    getNextReady()
    {
        Process *p = readyListMethods.getNext();
        return p ? p : readyListThreads.getNext();
    }

    void runReady();
    EventWrapper<Scheduler, &Scheduler::runReady> readyEvent;
    void scheduleReadyEvent();

    void pause();
    void stop();
    EventWrapper<Scheduler, &Scheduler::pause> pauseEvent;
    EventWrapper<Scheduler, &Scheduler::stop> stopEvent;

    Fiber *scMain;
    const ::sc_core::sc_report *_throwToScMain;

    bool
    starved()
    {
        return (readyListMethods.empty() && readyListThreads.empty() &&
                updateList.empty() && deltas.empty() &&
                (timeSlots.empty() || timeSlots.begin()->first > maxTick) &&
                initList.empty());
    }
    EventWrapper<Scheduler, &Scheduler::pause> starvationEvent;
    void scheduleStarvationEvent();

    bool _elaborationDone;
    bool _started;
    bool _stopNow;

    Status _status;

    Tick maxTick;
    Tick lastReadyTick;
    void
    maxTickFunc()
    {
        if (lastReadyTick != getCurTick())
            _changeStamp++;
        pause();
    }
    EventWrapper<Scheduler, &Scheduler::maxTickFunc> maxTickEvent;

    void timeAdvances() { trace(false); }
    EventWrapper<Scheduler, &Scheduler::timeAdvances> timeAdvancesEvent;
    void
    scheduleTimeAdvancesEvent()
    {
        if (!traceFiles.empty() && !timeAdvancesEvent.scheduled())
            schedule(&timeAdvancesEvent);
    }

    uint64_t _numCycles;
    uint64_t _changeStamp;

    Process *_current;

    bool initDone;
    bool runToTime;
    bool runOnce;

    ProcessList initList;

    ProcessList readyListMethods;
    ProcessList readyListThreads;

    ChannelList updateList;

    std::map<::Event *, Tick> eventsToSchedule;

    std::set<TraceFile *> traceFiles;

    void trace(bool delta);
};

extern Scheduler scheduler;

// A proxy function to avoid having to expose the scheduler in header files.
Process *getCurrentProcess();

inline void
Scheduler::TimeSlot::process()
{
    scheduler.stepChangeStamp();
    scheduler.status(StatusTiming);

    try {
        while (!events.empty())
            events.front()->run();
    } catch (...) {
        if (events.empty())
            scheduler.completeTimeSlot(this);
        else
            scheduler.schedule(this);
        scheduler.throwToScMain();
    }

    scheduler.status(StatusOther);
    scheduler.completeTimeSlot(this);
}

const ::sc_core::sc_report reportifyException();

} // namespace sc_gem5

#endif // __SYSTEMC_CORE_SCHEDULER_HH__