
I need a C# multithreading concurrency mechanism that lets threads wait for a given set of results, covering the following scenario:

  1. Two or more threads may need to load a given set of values at the same time.
  2. Each value must be requested only once, so if two threads need overlapping subsets, one must wait for the other to finish loading the shared values. We do not want a lock (or mutex or other primitive) per value, because the cost could be prohibitive.

The scenario looks like this (assume thread B gets in slightly earlier):

              thread A         thread B
  values      5, 8, 9, 12      7, 8, 9, 13, 14
  requests    5, 12            7, 8, 9, 13, 14
  waits for   8, 9
                               >> data loaded <<
  retrieves   8, 9
              >> data loaded <<
  returns     5, 8, 9, 12

Which concurrency mechanism should I use for this?

Keep in mind that producer/consumer will not work, because threads A and B are not consumers in the usual sense (each is only interested in some of the data).
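To illustrate the semantics being asked for, here is a minimal sketch (assuming .NET 4's ConcurrentDictionary and Lazy<T>, which are not given in the question): the first thread to request a key performs the expensive load, and any thread requesting the same key while it is in flight blocks until that single result is ready.

using System;
using System.Collections.Concurrent;

class SingleFlightCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, Lazy<TValue>> cache =
        new ConcurrentDictionary<TKey, Lazy<TValue>>();
    private readonly Func<TKey, TValue> load;

    public SingleFlightCache(Func<TKey, TValue> load)
    {
        this.load = load;
    }

    public TValue Get(TKey key)
    {
        // GetOrAdd may build a losing Lazy under a race, but only the
        // stored instance is ever forced, so 'load' runs exactly once per
        // key; concurrent callers for the same key block inside .Value.
        return cache.GetOrAdd(key, k => new Lazy<TValue>(() => load(k))).Value;
    }
}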

Thanks.


Are the values immutable? How long does loading a value take? – MaLio 2010-02-02 20:03:50


Are you saying you have a stream of data, and every item in that stream needs to be processed by two threads? Is latency a big concern? – Keith 2010-02-02 22:01:27


Each request has its own set of numbers; most of the time they will already be cached and so not requested again. Requests are expensive, which is why each one must be performed only once. – pablo 2010-02-02 22:46:20

Answer


This sounds a lot like a lock manager, so why not build one?

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

class LockManager<TKey>
{
    private Dictionary<TKey, List<EventWaitHandle>> locks =
        new Dictionary<TKey, List<EventWaitHandle>>();
    private Object syncRoot = new Object();

    public void Lock(TKey key)
    {
        do
        {
            Monitor.Enter(syncRoot);
            List<EventWaitHandle> waiters = null;
            if (locks.TryGetValue(key, out waiters))
            {
                // Key is locked, add ourselves to the waiting list.
                // Note that this code is not safe under OOM conditions.
                AutoResetEvent eventLockFree = new AutoResetEvent(false);
                waiters.Add(eventLockFree);
                Monitor.Exit(syncRoot);
                // Now wait for a notification, then loop and try again.
                eventLockFree.WaitOne();
            }
            else
            {
                // Key is free: register ourselves as its holder.
                waiters = new List<EventWaitHandle>();
                locks.Add(key, waiters);
                Monitor.Exit(syncRoot);
                // We're done.
                break;
            }
        } while (true);
    }

    public void Release(TKey key)
    {
        List<EventWaitHandle> waiters = null;
        lock (syncRoot)
        {
            if (!locks.TryGetValue(key, out waiters))
            {
                Debug.Assert(false, "Releasing a bad lock!");
                return;
            }
            locks.Remove(key);
        }
        // Notify ALL waiters; they race to re-acquire. Unfair notifications
        // are better than FIFO for reasons of lock convoys.
        foreach (EventWaitHandle waitHandle in waiters)
        {
            waitHandle.Set();
        }
    }
}

To use it, each thread must lock every value it needs before loading:

class Program 
{ 
    class ThreadData 
    { 
     public LockManager<int> LockManager { get; set; } 
     public int[] Work { get; set; } 
     public AutoResetEvent Done { get; set; } 
    } 

    static void Main(string[] args) 
    { 
     int[] forA = new int[] {5, 8, 9, 12}; 
     int[] forB = new int[] {7, 8, 9, 13, 14 }; 

     LockManager<int> lockManager = new LockManager<int>(); 

     ThreadData tdA = new ThreadData 
     { 
      LockManager = lockManager, 
      Work = forA, 
      Done = new AutoResetEvent(false), 
     }; 
     ThreadData tdB = new ThreadData 
     { 
      LockManager = lockManager, 
      Work = forB, 
      Done = new AutoResetEvent(false), 
     }; 

     ThreadPool.QueueUserWorkItem(new WaitCallback(Worker), tdA); 
     ThreadPool.QueueUserWorkItem(new WaitCallback(Worker), tdB); 

     WaitHandle.WaitAll(new WaitHandle[] { tdA.Done, tdB.Done }); 
    } 

    static void Worker(object args) 
    { 
     Debug.Assert(args is ThreadData); 
     ThreadData data = (ThreadData) args; 
     try 
     { 
      foreach (int key in data.Work) 
      { 
       data.LockManager.Lock(key); 
       Console.WriteLine("~{0}: {1}", 
        Thread.CurrentThread.ManagedThreadId, key); 
      // Simulate loading the data set for this key
       Thread.Sleep(1000); 
      } 
      foreach (int key in data.Work) 
      { 
       // Now release the locked keys
       data.LockManager.Release(key); 
      } 
     } 
     catch (Exception e) 
     { 
      Debug.Write(e); 
     } 
     finally 
     { 
      data.Done.Set(); 
     } 
    } 
} 
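One caveat with Worker as written: if Lock or the load throws partway through, the keys acquired so far are never released, and every other thread that needs them will block forever. A safer per-key pattern would be (a hedged sketch, not part of the original answer):

// Guard each key so an exception during loading cannot leak the lock.
data.LockManager.Lock(key);
try
{
    // ... load the data set for this key ...
}
finally
{
    data.LockManager.Release(key);
}

Note that this releases each key as soon as its load finishes, rather than holding the whole set until the end as Worker does.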

The biggest problem you will face is deadlock. Change the two arrays to {5, 8, 9, 7} and {7, 8, 9, 5} and you will see my point immediately.
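A standard way out is to impose a total order on acquisition: if every thread locks its keys in the same (say, ascending) order, two threads can never hold locks in opposite orders, so no circular wait can form. A minimal sketch of how Worker's first loop could do this (the sorting step is an assumption, not part of the original answer):

// Acquire keys in a canonical (sorted) order; with a single global
// order there is no circular wait, hence no deadlock.
int[] ordered = (int[]) data.Work.Clone();
Array.Sort(ordered);
foreach (int key in ordered)
{
    data.LockManager.Lock(key);
}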


Unless I'm missing something, this creates a lock for every item, which is exactly what I wanted to avoid because it would be too expensive for the system, right? – pablo 2010-02-02 23:22:21


It is one lock per *waiter* per *locked item*. A waiter (= one thread) can only wait on one item at a time, so the worst case is at most as many events as there are threads (an exact match means you are deadlocked, but deadlock is inherent in your requirements and can only be avoided by distributing the sets appropriately). – 2010-02-02 23:42:46


So, to avoid the deadlock, the policy would be: first acquire the values that nobody else is holding, and then wait for the ones you are missing; that way the deadlock is avoided, right? – pablo 2010-02-03 07:11:06
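A deadlock-free refinement of that idea is try-then-back-off: grab whatever is free without blocking, and if any key is busy, release everything acquired so far before retrying, so no thread ever waits while holding keys. A sketch, assuming a hypothetical TryLock method added to the LockManager above (it does not exist in the original answer):

// Hypothetical non-blocking variant of Lock for the LockManager above.
public bool TryLock(TKey key)
{
    lock (syncRoot)
    {
        if (locks.ContainsKey(key))
            return false;                      // key is held elsewhere
        locks.Add(key, new List<EventWaitHandle>());
        return true;                           // key acquired
    }
}

// Acquire all keys or none: back off completely on conflict, so no
// thread ever blocks while holding keys and circular wait is impossible.
static void LockAll(LockManager<int> manager, int[] keys)
{
    while (true)
    {
        int acquired = 0;
        while (acquired < keys.Length && manager.TryLock(keys[acquired]))
            acquired++;
        if (acquired == keys.Length)
            return;                            // got the whole set
        for (int i = 0; i < acquired; i++)     // conflict: release and retry
            manager.Release(keys[i]);
        Thread.Sleep(10);                      // brief pause to limit livelock
    }
}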