A P2P Voice Chat Tool in C# Based on UDP

This article describes a small application that uses UDP to transmit voice and text messages. There is no server and no client in this system: peers communicate with each other directly, and this approach works well in practice.
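
Conceptually, each peer simply binds a local UDP port and sends datagrams straight to the other peer's IP and port; there is no central relay. Below is a minimal sketch of such a symmetric peer using .NET's UdpClient. This is purely illustrative and is not the article's NetFrame code; the class and method names are my own.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

// Minimal sketch of a symmetric UDP peer: each instance both listens on its own
// port and sends directly to the other peer's IP/port. Illustrative only; the
// article itself uses the author's NetFrame (UDPThread/MsgTranslator) wrapper.
class UdpPeer
{
    private readonly UdpClient client;

    public UdpPeer(int localPort)
    {
        // Bind the local port this peer receives on.
        client = new UdpClient(localPort);
    }

    // Receive loop: print every datagram together with the sender's endpoint.
    public async Task ListenAsync()
    {
        while (true)
        {
            UdpReceiveResult result = await client.ReceiveAsync();
            Console.WriteLine("{0}: {1}", result.RemoteEndPoint,
                Encoding.UTF8.GetString(result.Buffer));
        }
    }

    // Send a text datagram directly to the other peer.
    public Task SendAsync(string message, string remoteIp, int remotePort)
    {
        byte[] data = Encoding.UTF8.GetBytes(message);
        return client.SendAsync(data, data.Length,
            new IPEndPoint(IPAddress.Parse(remoteIp), remotePort));
    }
}

Peer A would construct, say, new UdpPeer(7777) and send to peer B's address, and peer B would do the mirror image; both sides are identical.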

Capturing Audio

To send voice messages we first need to capture the audio. There are several ways to do this; one is to record with DirectX's DirectSound. For simplicity, I use the open-source library NAudio instead. Add a reference to NAudio.dll to the project (the code below uses the NAudio.Wave and NAudio.CoreAudioApi namespaces).

    //------------------ Recording ------------------------------
    private IWaveIn waveIn;
    private WaveFileWriter writer;

    private void LoadWasapiDevicesCombo()
    {
        var deviceEnum = new MMDeviceEnumerator();
        var devices = deviceEnum.EnumerateAudioEndPoints(DataFlow.Capture, DeviceState.Active).ToList();
        comboBox1.DataSource = devices;
        comboBox1.DisplayMember = "FriendlyName";
    }
    private void CreateWaveInDevice()
    {

        waveIn = new WaveIn();
        waveIn.WaveFormat = new WaveFormat(8000, 1);
        waveIn.DataAvailable += OnDataAvailable;
        waveIn.RecordingStopped += OnRecordingStopped;
    }
    void OnDataAvailable(object sender, WaveInEventArgs e)
    {
        if (this.InvokeRequired)
        {
            this.BeginInvoke(new EventHandler<WaveInEventArgs>(OnDataAvailable), sender, e);
        }
        else
        {
            writer.Write(e.Buffer, 0, e.BytesRecorded);
            int secondsRecorded = (int)(writer.Length / writer.WaveFormat.AverageBytesPerSecond);
            if (secondsRecorded >= 10) // cap recordings at 10 s
            {
                StopRecord();
            }
            else
            {
                l_sound.Text = secondsRecorded + " s";
            }
        }
    }
    void OnRecordingStopped(object sender, StoppedEventArgs e)
    {
        if (InvokeRequired)
        {
            BeginInvoke(new EventHandler<StoppedEventArgs>(OnRecordingStopped), sender, e);
        }
        else
        {
            FinalizeWaveFile();
        }
    }
    void StopRecord()
    {
        AllChangeBtn(btn_luyin, true);
        AllChangeBtn(btn_stop, false);
        AllChangeBtn(btn_sendsound, true);
        AllChangeBtn(btn_play, true);

        if (waveIn != null)
            waveIn.StopRecording();
        //Cleanup();
    }
    private void Cleanup()
    {
        if (waveIn != null)
        {
            waveIn.Dispose();
            waveIn = null;
        }
        FinalizeWaveFile();
    }
    private void FinalizeWaveFile()
    {
        if (writer != null)
        {
            writer.Dispose();
            writer = null;
        }
    }
    // Start recording
    private void btn_luyin_Click(object sender, EventArgs e)
    {
        btn_stop.Enabled = true;
        btn_luyin.Enabled = false;
        if (waveIn == null)
        {
            CreateWaveInDevice();
        }
        if (File.Exists(soundfile))
        {
            File.Delete(soundfile);
        }

        writer = new WaveFileWriter(soundfile, waveIn.WaveFormat);
        waveIn.StartRecording();
    }</pre> <p>上面的代碼實現了錄音,并且寫入文件p2psound_A.wav</p>

Sending the Audio

Once the audio has been captured, we need to send it out.

After recording, we click Send; the relevant code is:

    MsgTranslator tran = null;

    public Form1()
    {
        InitializeComponent();
        LoadWasapiDevicesCombo(); // populate the audio device list

        Config cfg = SeiClient.GetDefaultConfig();
        cfg.Port = 7777;
        UDPThread udp = new UDPThread(cfg);
        tran = new MsgTranslator(udp, cfg);
        tran.MessageReceived += tran_MessageReceived;
        tran.Debuged += new EventHandler<DebugEventArgs>(tran_Debuged);
    }
    private void btn_sendsound_Click(object sender, EventArgs e)
    {
        if (t_ip.Text == "")
        {
            MessageBox.Show("請輸入ip");
            return;
        }
        if (t_port.Text == "")
        {
            MessageBox.Show("請輸入端口號");
            return;
        }
        string ip = t_ip.Text;
        int port = int.Parse(t_port.Text);
        string nick = t_nick.Text;
        string msg = "Voice message";

        IPEndPoint remote = new IPEndPoint(IPAddress.Parse(ip), port);
        Msg m = new Msg(remote, "zz", nick, Commands.SendMsg, msg, "Come From A");
        m.IsRequireReceive = true;
        m.ExtendMessageBytes = FileContent(soundfile);
        m.PackageNo = Msg.GetRandomNumber();
        m.Type = Consts.MESSAGE_BINARY;
        tran.Send(m);
    }
    private byte[] FileContent(string fileName)
    {
        FileStream fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
        try
        {
            byte[] buffur = new byte[fs.Length];
            fs.Read(buffur, 0, (int)fs.Length);

            return buffur;
        }
        catch (Exception)
        {
            // On failure, return null and let the caller decide what to do.
            return null;
        }
        finally
        {
            if (fs != null)
            {
                // Close the file.
                fs.Close();
            }
        }
    }

With that, the recorded voice file has been sent.
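
As an aside, for files that comfortably fit in memory, the whole FileContent helper can be replaced by a single call from System.IO (the difference being that it throws on error instead of returning null):

    // Reads the entire file into a byte array in one call.
    byte[] bytes = File.ReadAllBytes(soundfile);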

Receiving and Playing the Audio

Receiving voice is really no different from receiving a text message, except that voice is sent as binary data. So when voice data arrives we write it to a file, and once it has all been received we simply play it back.

The code below saves the received data to a file. This method is the event handler my NetFrame raises when a message arrives, as described in the article mentioned earlier.

    void tran_MessageReceived(object sender, MessageEventArgs e)
    {
        Msg msg = e.msg;

        if (msg.Type == Consts.MESSAGE_BINARY)
        {
            string m = msg.Type + "->" + msg.UserName + " sent a binary message!";
            AddServerMessage(m);
            if (File.Exists(recive_soundfile))
            {
                File.Delete(recive_soundfile);
            }
            FileStream fs = new FileStream(recive_soundfile, FileMode.Create, FileAccess.Write);
            fs.Write(msg.ExtendMessageBytes, 0, msg.ExtendMessageBytes.Length);
            fs.Close();
            //play_sound(recive_soundfile);
            ChangeBtn(true);

        }
        else
        {
            string m = msg.Type + "->" + msg.UserName + " says: " + msg.NormalMsg;
            AddServerMessage(m);
        }
    }

After a voice message has been received, we play it back, again using the same NAudio library.

    //-------- Playback --------
    private IWavePlayer wavePlayer;
    private WaveStream reader;

    public void play_sound(string filename)
    {
        if (wavePlayer != null)
        {
            wavePlayer.Dispose();
            wavePlayer = null;
        }
        if (reader != null)
        {
            reader.Dispose();
        }
        reader = new MediaFoundationReader(filename, new MediaFoundationReader.MediaFoundationReaderSettings() { SingleReaderObject = true });

        if (wavePlayer == null)
        {

            wavePlayer = new WaveOut();
            wavePlayer.PlaybackStopped += WavePlayerOnPlaybackStopped;
            wavePlayer.Init(reader);
        }
        wavePlayer.Play();
    }
    private void WavePlayerOnPlaybackStopped(object sender, StoppedEventArgs stoppedEventArgs)
    {
        if (stoppedEventArgs.Exception != null)
        {
            MessageBox.Show(stoppedEventArgs.Exception.Message);
        }
        if (wavePlayer != null)
        {
            wavePlayer.Stop();
        }
        btn_luyin.Enabled = true;
    }

    private void btn_play_Click(object sender, EventArgs e)
    {
        btn_luyin.Enabled = false;
        play_sound(soundfile);
    }

[Screenshot: http://img.blog.csdn.net/20141016001152593?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvemh1anVueHh4eHg=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/SouthEast]

The screenshot above shows the interface while sending and receiving a voice message.

Technical Summary

The main techniques involved are UDP for transport and NAudio for recording and playback.
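
One side note on playback: MediaFoundationReader in the code above relies on Windows Media Foundation, and since the received data is a plain WAV file written by WaveFileWriter, it could equally be played with NAudio's WaveFileReader. A minimal sketch follows; the file name is the assumed recive_soundfile path mentioned earlier.

using NAudio.Wave;

// Play a received WAV clip with WaveFileReader + WaveOutEvent and block until it finishes.
// "recive_sound.wav" stands in for whatever path recive_soundfile actually points to.
using (var reader = new WaveFileReader("recive_sound.wav"))
using (var output = new WaveOutEvent())
{
    output.Init(reader);
    output.Play();
    while (output.PlaybackState == PlaybackState.Playing)
    {
        System.Threading.Thread.Sleep(200);
    }
}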

The UDP transport classes used here are available on GitHub (the link is also in the profile section on the left of my blog); the project address is https://github.com/zhujunxxxxx/ZZNetFrame

I hope this article offers a useful starting point.

Original article by 小竹zz
