// dram_ctrl.hh revision 10394

/*
 * Copyright (c) 2012-2014 ARM Limited
 * All rights reserved
 *
 * The license below extends only to copyright in the software and shall
 * not be construed as granting a license to any other intellectual
 * property including but not limited to intellectual property relating
 * to a hardware implementation of the functionality of the software
 * licensed hereunder. You may use the software subject to the license
 * terms below provided that you ensure that this notice is replicated
 * unmodified and in its entirety in all distributions of the software,
 * modified or unmodified, in source code or in binary form.
 *
 * Copyright (c) 2013 Amin Farmahini-Farahani
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met: redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer;
 * redistributions in binary form must reproduce the above copyright
 * notice, this list of conditions and the following disclaimer in the
 * documentation and/or other materials provided with the distribution;
 * neither the name of the copyright holders nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * Authors: Andreas Hansson
 *          Ani Udipi
 *          Neha Agarwal
 */

/**
 * @file
 * DRAMCtrl declaration
 */

#ifndef __MEM_DRAM_CTRL_HH__
#define __MEM_DRAM_CTRL_HH__

#include <deque>

#include "base/statistics.hh"
#include "enums/AddrMap.hh"
#include "enums/MemSched.hh"
#include "enums/PageManage.hh"
#include "mem/abstract_mem.hh"
#include "mem/qport.hh"
#include "params/DRAMCtrl.hh"
#include "sim/eventq.hh"

/**
 * The DRAM controller is a single-channel memory controller capturing
 * the most important timing constraints associated with a
 * contemporary DRAM. For multi-channel memory systems, the controller
 * is combined with a crossbar model, with the channel address
 * interleaving taking part in the crossbar.
 *
 * As a basic design principle, this controller model is not cycle
 * callable, but instead uses events to decide: 1) when new decisions
 * can be made, 2) when resources become available, 3) when things are
 * to be considered done, and 4) when to send things back. Through
 * these simple principles, the model delivers high performance and
 * lots of flexibility, allowing users to evaluate the system impact
 * of a wide range of memory technologies, such as DDR3/4,
 * LPDDR2/3/4, WideIO1/2, HBM and HMC.
 *
 * For more details, please see Hansson et al, "Simulating DRAM
 * controllers for future system architecture exploration",
 * Proc. ISPASS, 2014. If you use this model as part of your research
 * please cite the paper.
 */
class DRAMCtrl : public AbstractMemory
{

  private:

    // For now, make use of a queued slave port to avoid dealing with
    // flow control for the responses being sent back
    class MemoryPort : public QueuedSlavePort
    {

        SlavePacketQueue queue;
        DRAMCtrl& memory;

      public:

        MemoryPort(const std::string& name, DRAMCtrl& _memory);

      protected:

        Tick recvAtomic(PacketPtr pkt);

        void recvFunctional(PacketPtr pkt);

        bool recvTimingReq(PacketPtr);

        virtual AddrRangeList getAddrRanges() const;

    };

    /**
     * Our incoming port, for a multi-ported controller add a crossbar
     * in front of it
     */
    MemoryPort port;

    /**
     * Remember if we have to retry a request when available.
     */
    bool retryRdReq;
    bool retryWrReq;

    /**
     * Bus state used to control the read/write switching and drive
     * the scheduling of the next request.
     */
    enum BusState {
        READ = 0,
        READ_TO_WRITE,
        WRITE,
        WRITE_TO_READ
    };

    BusState busState;

    /** List to keep track of activate ticks */
    std::vector<std::deque<Tick>> actTicks;

    /**
     * A basic class to track the bank state, i.e. what row is
     * currently open (if any), when the bank is free to accept a new
     * column (read/write) command, when it can be precharged, and
     * when it can be activated.
     *
     * The bank also keeps track of how many bytes have been accessed
     * in the open row since it was opened.
     */
    class Bank
    {

      public:

        static const uint32_t NO_ROW = -1;

        uint32_t openRow;
        uint8_t rank;
        uint8_t bank;
        uint8_t bankgr;

        Tick colAllowedAt;
        Tick preAllowedAt;
        Tick actAllowedAt;

        uint32_t rowAccesses;
        uint32_t bytesAccessed;

        Bank() :
            openRow(NO_ROW), rank(0), bank(0), bankgr(0),
            colAllowedAt(0), preAllowedAt(0), actAllowedAt(0),
            rowAccesses(0), bytesAccessed(0)
        { }
    };

    /**
     * A burst helper helps organize and manage a packet that is larger than
     * the DRAM burst size. A system packet that is larger than the burst size
     * is split into multiple DRAM packets and all those DRAM packets point to
     * a single burst helper such that we know when the whole packet is served.
     */
    class BurstHelper {

      public:

        /** Number of DRAM bursts required for a system packet */
        const unsigned int burstCount;

        /** Number of DRAM bursts serviced so far for a system packet */
        unsigned int burstsServiced;

        BurstHelper(unsigned int _burstCount)
            : burstCount(_burstCount), burstsServiced(0)
        { }
    };

    /**
     * A DRAM packet stores packets along with the timestamp of when
     * the packet entered the queue, and also the decoded address.
     */
    class DRAMPacket {

      public:

        /** When did request enter the controller */
        const Tick entryTime;

        /** When will request leave the controller */
        Tick readyTime;

        /** This comes from the outside world */
        const PacketPtr pkt;

        const bool isRead;

        /** Will be populated by address decoder */
        const uint8_t rank;
        const uint8_t bank;
        const uint32_t row;

        /**
         * Bank id is calculated considering banks in all the ranks
         * eg: 2 ranks each with 8 banks, then bankId = 0 --> rank0, bank0 and
         * bankId = 8 --> rank1, bank0
         */
        const uint16_t bankId;

        /**
         * The starting address of the DRAM packet.
         * This address could be unaligned to burst size boundaries. The
         * reason is to keep the address offset so we can accurately check
         * incoming read packets with packets in the write queue.
         */
        Addr addr;

        /**
         * The size of this dram packet in bytes
         * It is always equal or smaller than the DRAM burst size
         */
        unsigned int size;

        /**
         * A pointer to the BurstHelper if this DRAMPacket is a split packet
         * If not a split packet (common case), this is set to NULL
         */
        BurstHelper* burstHelper;
        Bank& bankRef;

        DRAMPacket(PacketPtr _pkt, bool is_read, uint8_t _rank, uint8_t _bank,
                   uint32_t _row, uint16_t bank_id, Addr _addr,
                   unsigned int _size, Bank& bank_ref)
            : entryTime(curTick()), readyTime(curTick()),
              pkt(_pkt), isRead(is_read), rank(_rank), bank(_bank), row(_row),
              bankId(bank_id), addr(_addr), size(_size), burstHelper(NULL),
              bankRef(bank_ref)
        { }

    };

    /**
     * Bunch of things required to set up "events" in gem5.
     * When the event "respondEvent" occurs, for example, the method
     * processRespondEvent is called; no parameters are allowed
     * in these methods
     */
    void processNextReqEvent();
    EventWrapper<DRAMCtrl,&DRAMCtrl::processNextReqEvent> nextReqEvent;

    void processRespondEvent();
    EventWrapper<DRAMCtrl, &DRAMCtrl::processRespondEvent> respondEvent;

    void processActivateEvent();
    EventWrapper<DRAMCtrl, &DRAMCtrl::processActivateEvent> activateEvent;

    void processPrechargeEvent();
    EventWrapper<DRAMCtrl, &DRAMCtrl::processPrechargeEvent> prechargeEvent;

    void processRefreshEvent();
    EventWrapper<DRAMCtrl, &DRAMCtrl::processRefreshEvent> refreshEvent;

    void processPowerEvent();
    EventWrapper<DRAMCtrl,&DRAMCtrl::processPowerEvent> powerEvent;

    /**
     * Check if the read queue has room for more entries
     *
     * @param pktCount The number of entries needed in the read queue
     * @return true if read queue is full, false otherwise
     */
    bool readQueueFull(unsigned int pktCount) const;

    /**
     * Check if the write queue has room for more entries
     *
     * @param pktCount The number of entries needed in the write queue
     * @return true if write queue is full, false otherwise
     */
    bool writeQueueFull(unsigned int pktCount) const;

    /**
     * When a new read comes in, first check if the write queue has a
     * pending request to the same address. If not, decode the
     * address to populate rank/bank/row, create one or multiple
     * "dram_pkt", and push them to the back of the read queue.
     * If this is the only read request in the system, schedule an
     * event to start servicing it.
     *
     * @param pkt The request packet from the outside world
     * @param pktCount The number of DRAM bursts the pkt
     * translates to. If pkt size is larger than one full burst,
     * then pktCount is greater than one.
     */
    void addToReadQueue(PacketPtr pkt, unsigned int pktCount);

    /**
     * Decode the incoming pkt, create a dram_pkt and push to the
     * back of the write queue. If the write queue length is more than
     * the threshold specified by the user, i.e. the queue is beginning
     * to get full, stop reads, and start draining writes.
     *
     * @param pkt The request packet from the outside world
     * @param pktCount The number of DRAM bursts the pkt
     * translates to. If pkt size is larger than one full burst,
     * then pktCount is greater than one.
     */
    void addToWriteQueue(PacketPtr pkt, unsigned int pktCount);

    /**
     * Actually do the DRAM access - figure out the latency it
     * will take to service the req based on bank state, channel state etc,
     * and then update those states to account for this request. Based
     * on this, update the packet's "readyTime" and move it to the
     * response queue from where it will eventually go back to the
     * outside world.
     *
     * @param dram_pkt The DRAM packet created from the outside world pkt
     */
    void doDRAMAccess(DRAMPacket* dram_pkt);

    /**
     * When a packet reaches its "readyTime" in the response Q,
     * use the "access()" method in AbstractMemory to actually
     * create the response packet, and send it back to the outside
     * world requestor.
     *
     * @param pkt The packet from the outside world
     * @param static_latency Static latency to add before sending the packet
     */
    void accessAndRespond(PacketPtr pkt, Tick static_latency);

    /**
     * Address decoder to figure out physical mapping onto ranks,
     * banks, and rows. This function is called multiple times on the same
     * system packet if the packet is larger than a burst of the memory. The
     * dramPktAddr is used for the offset within the packet.
     *
     * @param pkt The packet from the outside world
     * @param dramPktAddr The starting address of the DRAM packet
     * @param size The size of the DRAM packet in bytes
     * @param isRead Is the request for a read or a write to DRAM
     * @return A DRAMPacket pointer with the decoded information
     */
    DRAMPacket* decodeAddr(PacketPtr pkt, Addr dramPktAddr, unsigned int size,
                           bool isRead);

    /**
     * The memory scheduler/arbiter - picks which request needs to
     * go next, based on the specified policy such as FCFS or FR-FCFS
     * and moves it to the head of the queue.
     * Prioritizes accesses to the same rank as the previous burst
     * unless the controller is switching command type.
     *
     * @param queue Queued requests to consider
     * @param switched_cmd_type Command type is changing
     */
    void chooseNext(std::deque<DRAMPacket*>& queue, bool switched_cmd_type);

    /**
     * For the FR-FCFS policy, reorder the read/write queue depending on
     * row buffer hits and earliest banks available in DRAM.
     * Prioritizes accesses to the same rank as the previous burst
     * unless the controller is switching command type.
     *
     * @param queue Queued requests to consider
     * @param switched_cmd_type Command type is changing
     */
    void reorderQueue(std::deque<DRAMPacket*>& queue, bool switched_cmd_type);

    /**
     * Find which are the earliest banks ready to issue an activate
     * for the enqueued requests. Assumes a maximum of 64 banks per DIMM.
     * Also checks if the bank is already prepped.
     *
     * @param queue Queued requests to consider
     * @param switched_cmd_type Command type is changing
     * @return One-hot encoded mask of bank indices
     */
    uint64_t minBankPrep(const std::deque<DRAMPacket*>& queue,
                         bool switched_cmd_type) const;

    /**
     * Keep track of when row activations happen, in order to enforce
     * the maximum number of activations in the activation window. The
     * method updates the time that the banks become available based
     * on the current limits.
     *
     * @param bank Reference to the bank
     * @param act_tick Time when the activation takes place
     * @param row Index of the row
     */
    void activateBank(Bank& bank, Tick act_tick, uint32_t row);

    /**
     * Precharge a given bank and also update when the precharge is
     * done. This will also deal with any stats related to the
     * accesses to the open page.
     *
     * @param bank_ref The bank to precharge
     * @param pre_at Time when the precharge takes place
     * @param trace If this is an auto precharge, do not add it to the trace
     */
    void prechargeBank(Bank& bank_ref, Tick pre_at, bool trace = true);

    /**
     * Used for debugging to observe the contents of the queues.
     */
    void printQs() const;

    /**
     * The controller's main read and write queues
     */
    std::deque<DRAMPacket*> readQueue;
    std::deque<DRAMPacket*> writeQueue;

    /**
     * Response queue where read packets wait after we're done working
     * with them, but it's not time to send the response yet. The
     * responses are stored separately mostly to keep the code clean
     * and help with event scheduling. For all logical purposes such
     * as sizing the read queue, this and the main read queue need to
     * be added together.
     */
    std::deque<DRAMPacket*> respQueue;

    /**
     * If we need to drain, keep the drain manager around until we're
     * done here.
     */
    DrainManager *drainManager;

    /**
     * Multi-dimensional vector of banks, first dimension is ranks,
     * second is bank
     */
    std::vector<std::vector<Bank> > banks;

    /**
     * The following are basic design parameters of the memory
     * controller, and are initialized based on parameter values.
     * The rowsPerBank is determined based on the capacity, number of
     * ranks and banks, the burst size, and the row buffer size.
     */
    const uint32_t deviceBusWidth;
    const uint32_t burstLength;
    const uint32_t deviceRowBufferSize;
    const uint32_t devicesPerRank;
    const uint32_t burstSize;
    const uint32_t rowBufferSize;
    const uint32_t columnsPerRowBuffer;
    const uint32_t columnsPerStripe;
    const uint32_t ranksPerChannel;
    const uint32_t bankGroupsPerRank;
    const bool bankGroupArch;
    const uint32_t banksPerRank;
    const uint32_t channels;
    uint32_t rowsPerBank;
    const uint32_t readBufferSize;
    const uint32_t writeBufferSize;
    const uint32_t writeHighThreshold;
    const uint32_t writeLowThreshold;
    const uint32_t minWritesPerSwitch;
    uint32_t writesThisTime;
    uint32_t readsThisTime;

    /**
     * Basic memory timing parameters initialized based on parameter
     * values.
     */
    const Tick M5_CLASS_VAR_USED tCK;
    const Tick tWTR;
    const Tick tRTW;
    const Tick tCS;
    const Tick tBURST;
    const Tick tCCD_L;
    const Tick tRCD;
    const Tick tCL;
    const Tick tRP;
    const Tick tRAS;
    const Tick tWR;
    const Tick tRTP;
    const Tick tRFC;
    const Tick tREFI;
    const Tick tRRD;
    const Tick tRRD_L;
    const Tick tXAW;
    const uint32_t activationLimit;

    /**
     * Memory controller configuration initialized based on parameter
     * values.
     */
    Enums::MemSched memSchedPolicy;
    Enums::AddrMap addrMapping;
    Enums::PageManage pageMgmt;

    /**
     * Max column accesses (read and write) per row, before forcefully
     * closing it.
     */
    const uint32_t maxAccessesPerRow;

    /**
     * Pipeline latency of the controller frontend. The frontend
     * contribution is added to writes (that complete when they are in
     * the write buffer) and reads that are serviced by the write buffer.
     */
    const Tick frontendLatency;

    /**
     * Pipeline latency of the backend and PHY. Along with the
     * frontend contribution, this latency is added to reads serviced
     * by the DRAM.
     */
    const Tick backendLatency;

    /**
     * Until when is the main data bus already spoken for?
     */
    Tick busBusyUntil;

    /**
     * Keep track of when a refresh is due.
     */
    Tick refreshDueAt;

    /**
     * The refresh state is used to control the progress of the
     * refresh scheduling. When normal operation is in progress the
     * refresh state is idle. From there, it progresses to the refresh
     * drain state once tREFI has passed. The refresh drain state
     * captures the DRAM row active state, as it will stay there until
     * all ongoing accesses complete. Thereafter all banks are
     * precharged, and lastly, the DRAM is refreshed.
     */
    enum RefreshState {
        REF_IDLE = 0,
        REF_DRAIN,
        REF_PRE,
        REF_RUN
    };

    RefreshState refreshState;

    /**
     * The power state captures the different operational states of
     * the DRAM and interacts with the bus read/write state machine,
     * and the refresh state machine. In the idle state all banks are
     * precharged. From there we either go to an auto refresh (as
     * determined by the refresh state machine), or to a precharge
     * power down mode. From idle the memory can also go to the active
     * state (with one or more banks active), and in turn from there
     * to active power down. At the moment we do not capture the deep
     * power down and self-refresh state.
     */
    enum PowerState {
        PWR_IDLE = 0,
        PWR_REF,
        PWR_PRE_PDN,
        PWR_ACT,
        PWR_ACT_PDN
    };

    /**
     * Since we are taking decisions out of order, we need to keep
     * track of what power transition is happening at what time, such
     * that we can go back in time and change history. For example, if
     * we precharge all banks and schedule going to the idle state, we
     * might at a later point decide to activate a bank before the
     * transition to idle would have taken place.
     */
    PowerState pwrStateTrans;

    /**
     * Current power state.
     */
    PowerState pwrState;

    /**
     * Schedule a power state transition in the future, and
     * potentially override an already scheduled transition.
     *
     * @param pwr_state Power state to transition to
     * @param tick Tick when transition should take place
     */
    void schedulePowerEvent(PowerState pwr_state, Tick tick);

    Tick prevArrival;

    /**
     * The soonest you have to start thinking about the next request
     * is the longest access time that can occur before
     * busBusyUntil. Assuming you need to precharge, open a new row,
     * and access, it is tRP + tRCD + tCL.
     */
    Tick nextReqTime;

    // All statistics that the model needs to capture
    Stats::Scalar readReqs;
    Stats::Scalar writeReqs;
    Stats::Scalar readBursts;
    Stats::Scalar writeBursts;
    Stats::Scalar bytesReadDRAM;
    Stats::Scalar bytesReadWrQ;
    Stats::Scalar bytesWritten;
    Stats::Scalar bytesReadSys;
    Stats::Scalar bytesWrittenSys;
    Stats::Scalar servicedByWrQ;
    Stats::Scalar mergedWrBursts;
    Stats::Scalar neitherReadNorWrite;
    Stats::Vector perBankRdBursts;
    Stats::Vector perBankWrBursts;
    Stats::Scalar numRdRetry;
    Stats::Scalar numWrRetry;
    Stats::Scalar totGap;
    Stats::Vector readPktSize;
    Stats::Vector writePktSize;
    Stats::Vector rdQLenPdf;
    Stats::Vector wrQLenPdf;
    Stats::Histogram bytesPerActivate;
    Stats::Histogram rdPerTurnAround;
    Stats::Histogram wrPerTurnAround;

    // Latencies summed over all requests
    Stats::Scalar totQLat;
    Stats::Scalar totMemAccLat;
    Stats::Scalar totBusLat;

    // Average latencies per request
    Stats::Formula avgQLat;
    Stats::Formula avgBusLat;
    Stats::Formula avgMemAccLat;

    // Average bandwidth
    Stats::Formula avgRdBW;
    Stats::Formula avgWrBW;
    Stats::Formula avgRdBWSys;
    Stats::Formula avgWrBWSys;
    Stats::Formula peakBW;
    Stats::Formula busUtil;
    Stats::Formula busUtilRead;
    Stats::Formula busUtilWrite;

    // Average queue lengths
    Stats::Average avgRdQLen;
    Stats::Average avgWrQLen;

    // Row hit count and rate
    Stats::Scalar readRowHits;
    Stats::Scalar writeRowHits;
    Stats::Formula readRowHitRate;
    Stats::Formula writeRowHitRate;
    Stats::Formula avgGap;

    // DRAM Power Calculation
    Stats::Formula pageHitRate;
    Stats::Vector pwrStateTime;

    // Track when we transitioned to the current power state
    Tick pwrStateTick;

    // To track the number of banks which are currently active
    unsigned int numBanksActive;

    // Holds the value of the rank of the burst issued
    uint8_t activeRank;

    /** @todo this is a temporary workaround until the 4-phase code is
     * committed. upstream caches need this packet until true is returned, so
     * hold onto it for deletion until a subsequent call
     */
    std::vector<PacketPtr> pendingDelete;

  public:

    void regStats();

    DRAMCtrl(const DRAMCtrlParams* p);

    unsigned int drain(DrainManager* dm);

    virtual BaseSlavePort& getSlavePort(const std::string& if_name,
                                        PortID idx = InvalidPortID);

    virtual void init();
    virtual void startup();

  protected:

    Tick recvAtomic(PacketPtr pkt);
    void recvFunctional(PacketPtr pkt);
    bool recvTimingReq(PacketPtr pkt);

};

#endif //__MEM_DRAM_CTRL_HH__