
leveldb - Concurrent Write Handling

When writes arrive concurrently, leveldb cleverly exploits a small time window to merge them into a single batch write. This part of the code is well worth reading:

Status DBImpl::Write(const WriteOptions& options, WriteBatch* my_batch) {
  // A begin
  Writer w(&mutex_);
  w.batch = my_batch;
  w.sync = options.sync;
  w.done = false;
  // A end

  // B begin
  MutexLock l(&mutex_);
  writers_.push_back(&w);
  while (!w.done && &w != writers_.front()) {
    w.cv.Wait();
  }
  if (w.done) {
    return w.status;
  }
  // B end

  // May temporarily unlock and wait.
  Status status = MakeRoomForWrite(my_batch == NULL);
  uint64_t last_sequence = versions_->LastSequence();
  Writer* last_writer = &w;
  if (status.ok() && my_batch != NULL) {  // NULL batch is for compactions
    WriteBatch* updates = BuildBatchGroup(&last_writer);
    WriteBatchInternal::SetSequence(updates, last_sequence + 1);
    last_sequence += WriteBatchInternal::Count(updates);

    // Add to log and apply to memtable.  We can release the lock
    // during this phase since &w is currently responsible for logging
    // and protects against concurrent loggers and concurrent writes
    // into mem_.
    {
      mutex_.Unlock();
      status = log_->AddRecord(WriteBatchInternal::Contents(updates));
      bool sync_error = false;
      if (status.ok() && options.sync) {
        status = logfile_->Sync();
        if (!status.ok()) {
          sync_error = true;
        }
      }
      if (status.ok()) {
        status = WriteBatchInternal::InsertInto(updates, mem_);
      }
      mutex_.Lock();
      if (sync_error) {
        // The state of the log file is indeterminate: the log record we
        // just added may or may not show up when the DB is re-opened.
        // So we force the DB into a mode where all future writes fail.
        RecordBackgroundError(status);
      }
    }
    if (updates == tmp_batch_) tmp_batch_->Clear();

    versions_->SetLastSequence(last_sequence);
  }

  while (true) {
    Writer* ready = writers_.front();
    writers_.pop_front();
    if (ready != &w) {
      ready->status = status;
      ready->done = true;
      ready->cv.Signal();
    }
    if (ready == last_writer) break;
  }

  // Notify new head of write queue
  if (!writers_.empty()) {
    writers_.front()->cv.Signal();
  }

  return status;
}

Suppose six writers, w1, w2, w3, w4, w5 and w6, issue write requests concurrently.

Section B lets whichever writer wins the race for mutex_, say w1, take the lock. w1 pushes itself onto the writers_ queue; since the queue then contains only w1, it is at the front and proceeds straight into BuildBatchGroup. When execution reaches mutex_.Unlock(), the mutex is released; this is safe because none of the other writers satisfy the front-of-queue condition, so none of them can enter the log/memtable write phase. At this point w2 through w6 compete for the lock, and because each of them fails the front-of-queue check in section B, each ends up waiting on its condition variable (releasing the mutex while it waits). The writers queued up behind w1 may land in an arbitrary order, e.g. (w3, w5, w2, w4), with w6 perhaps not having arrived yet.
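
For reference, the Writer handle that each caller pushes onto writers_ bundles its batch with a per-writer condition variable and a completion flag. A rough sketch of the struct as it appears in db_impl.cc (field comments added here):

struct DBImpl::Writer {
  Status status;      // result, filled in by whichever thread actually performs the write
  WriteBatch* batch;  // this caller's updates
  bool sync;          // did this caller ask for a synced (durable) write?
  bool done;          // set to true once some group leader has applied this batch
  port::CondVar cv;   // signalled when done becomes true or this writer reaches the front

  explicit Writer(port::Mutex* mu) : cv(mu) {}
};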

w1 then performs the log write and the memtable write. Once those are done, execution reaches mutex_.Lock() and the mutex is held again; section B can no longer acquire it, so the queue cannot change while w1 finishes up.

Next, in the final while (true) loop, w1 is popped off the queue. Because ready == &w (the leader does not signal itself) and ready == last_writer, the loop exits immediately, and the code then signals the new front of the queue, w3.

When w3 wakes up, it finds itself at the front of the queue and proceeds into BuildBatchGroup. That function walks the writers currently in the queue and merges their pending updates into one batch, i.e. w3, w5, w2 and w4 are combined into a single batch, and last_writer is set to the last element taken, w4. Once mutex_.Unlock() is executed again, the lock is free, so the queue can keep changing as new DBImpl::Write calls arrive; it might, for example, become (w3, w5, w2, w4, w6, w9, w8).
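
For completeness, here is a condensed sketch of what BuildBatchGroup does, paraphrased from db_impl.cc (exact size limits and details may differ between leveldb versions): starting from the front writer, it keeps appending the batches of the writers behind it, switching to tmp_batch_ once there is more than one, stops when the group would grow too large or when a sync write would be folded into a non-sync group, and records the last writer it absorbed in *last_writer.

WriteBatch* DBImpl::BuildBatchGroup(Writer** last_writer) {
  mutex_.AssertHeld();
  Writer* first = writers_.front();
  WriteBatch* result = first->batch;

  // Allow the group to grow up to a maximum size, but if the original
  // write is small, limit the growth so we do not slow down the small
  // write too much.
  size_t size = WriteBatchInternal::ByteSize(first->batch);
  size_t max_size = 1 << 20;
  if (size <= (128 << 10)) {
    max_size = size + (128 << 10);
  }

  *last_writer = first;
  std::deque<Writer*>::iterator iter = writers_.begin();
  ++iter;  // advance past "first"
  for (; iter != writers_.end(); ++iter) {
    Writer* w = *iter;
    if (w->sync && !first->sync) {
      // Do not include a sync write into a batch handled by a non-sync write.
      break;
    }
    if (w->batch != NULL) {
      size += WriteBatchInternal::ByteSize(w->batch);
      if (size > max_size) {
        break;  // do not make the batch too big
      }
      // Append to *result; switch to tmp_batch_ so the caller's own
      // batch is not modified.
      if (result == first->batch) {
        result = tmp_batch_;
        WriteBatchInternal::Append(result, first->batch);
      }
      WriteBatchInternal::Append(result, w->batch);
    }
    *last_writer = w;
  }
  return result;
}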

The log/memtable section (AddRecord, the optional Sync, and InsertInto) then writes the whole merged batch of w3, w5, w2 and w4 in one shot. In the wake-up loop, w5, w2 and w4 each have done set to true and receive a cv.Signal(); when ready == last_writer (i.e. w4), the loop breaks. Finally the new front of the queue, w6, is signalled, and the whole procedure repeats from the top.
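
This also explains why w5, w2 and w4 never touch the log themselves: when each of them wakes up inside section B, it sees done == true and returns immediately with the status the leader filled in.

// Section B, as seen by a follower such as w5 after the leader w3 has finished:
while (!w.done && &w != writers_.front()) {
  w.cv.Wait();
}
if (w.done) {
  return w.status;  // w5, w2 and w4 all exit here; only w3 performed the actual I/O
}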

In this way, many concurrent calls to Write are merged into a single batch that is written to the log and the memtable once.
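
As a rough illustration of when this group commit kicks in (this driver is not from the original post; the path, thread count and key names are made up), several threads calling DB::Put concurrently all funnel through DBImpl::Write and get merged into batches:

#include <string>
#include <thread>
#include <vector>

#include "leveldb/db.h"

int main() {
  leveldb::DB* db = NULL;
  leveldb::Options options;
  options.create_if_missing = true;
  leveldb::Status s = leveldb::DB::Open(options, "/tmp/write_batch_demo", &db);
  if (!s.ok()) return 1;

  // Six writer threads, mirroring w1..w6 in the walkthrough above.
  std::vector<std::thread> threads;
  for (int t = 0; t < 6; ++t) {
    threads.emplace_back([db, t]() {
      for (int i = 0; i < 1000; ++i) {
        leveldb::WriteOptions wo;  // wo.sync defaults to false
        db->Put(wo,
                "key-" + std::to_string(t) + "-" + std::to_string(i),
                "value");
      }
    });
  }
  for (std::thread& th : threads) th.join();

  delete db;
  return 0;
}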

  
