gRPC/C++ - How to detect client disconnection in an async server

2024-05-18

I am creating my gRPC async server from this code example: https://github.com/grpc/grpc/blob/v1.32.0/examples/cpp/helloworld/greeter_async_server.cc

#include <memory>
#include <iostream>
#include <string>
#include <thread>

#include <grpcpp/grpcpp.h>
#include <grpc/support/log.h>

#ifdef BAZEL_BUILD
#include "examples/protos/helloworld.grpc.pb.h"
#else
#include "helloworld.grpc.pb.h"
#endif

using grpc::Server;
using grpc::ServerAsyncResponseWriter;
using grpc::ServerBuilder;
using grpc::ServerContext;
using grpc::ServerCompletionQueue;
using grpc::Status;
using helloworld::HelloRequest;
using helloworld::HelloReply;
using helloworld::Greeter;

class ServerImpl final {
 public:
  ~ServerImpl() {
    server_->Shutdown();
    // Always shutdown the completion queue after the server.
    cq_->Shutdown();
  }

  // There is no shutdown handling in this code.
  void Run() {
    std::string server_address("0.0.0.0:50051");

    ServerBuilder builder;
    // Listen on the given address without any authentication mechanism.
    builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
    // Register "service_" as the instance through which we'll communicate with
    // clients. In this case it corresponds to an *asynchronous* service.

    //LINES ADDED BY ME TO IMPLEMENT KEEPALIVE
    builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIME_MS, 2000);
    builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 3000);
    builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
    //END OF LINES ADDED BY ME

    builder.RegisterService(&service_);
    // Get hold of the completion queue used for the asynchronous communication
    // with the gRPC runtime.
    cq_ = builder.AddCompletionQueue();
    // Finally assemble the server.
    server_ = builder.BuildAndStart();
    std::cout << "Server listening on " << server_address << std::endl;

    // Proceed to the server's main loop.
    HandleRpcs();
  }

 private:
  // Class encompassing the state and logic needed to serve a request.
  class CallData {
   public:
    // Take in the "service" instance (in this case representing an asynchronous
    // server) and the completion queue "cq" used for asynchronous communication
    // with the gRPC runtime.
    CallData(Greeter::AsyncService* service, ServerCompletionQueue* cq)
        : service_(service), cq_(cq), responder_(&ctx_), status_(CREATE) {
      // Invoke the serving logic right away.
      Proceed();
    }

    void Proceed() {
      if (status_ == CREATE) {
        // Make this instance progress to the PROCESS state.
        status_ = PROCESS;

        // As part of the initial CREATE state, we *request* that the system
        // start processing SayHello requests. In this request, "this" acts as
        // the tag uniquely identifying the request (so that different CallData
        // instances can serve different requests concurrently), in this case
        // the memory address of this CallData instance.
        service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_,
                                  this);
      } else if (status_ == PROCESS) {
        // Spawn a new CallData instance to serve new clients while we process
        // the one for this CallData. The instance will deallocate itself as
        // part of its FINISH state.
        new CallData(service_, cq_);

        // The actual processing.
        std::string prefix("Hello ");
        reply_.set_message(prefix + request_.name());

        // And we are done! Let the gRPC runtime know we've finished, using the
        // memory address of this instance as the uniquely identifying tag for
        // the event.
        status_ = FINISH;
        responder_.Finish(reply_, Status::OK, this);
      } else {
        GPR_ASSERT(status_ == FINISH);
        // Once in the FINISH state, deallocate ourselves (CallData).
        delete this;
      }
    }

   private:
    // The means of communication with the gRPC runtime for an asynchronous
    // server.
    Greeter::AsyncService* service_;
    // The producer-consumer queue used for asynchronous server notifications.
    ServerCompletionQueue* cq_;
    // Context for the rpc, allowing us to tweak aspects of it such as the use
    // of compression, authentication, as well as to send metadata back to the
    // client.
    ServerContext ctx_;

    // What we get from the client.
    HelloRequest request_;
    // What we send back to the client.
    HelloReply reply_;

    // The means to get back to the client.
    ServerAsyncResponseWriter<HelloReply> responder_;

    // Let's implement a tiny state machine with the following states.
    enum CallStatus { CREATE, PROCESS, FINISH };
    CallStatus status_;  // The current serving state.
  };

  // This can be run in multiple threads if needed.
  void HandleRpcs() {
    // Spawn a new CallData instance to serve new clients.
    new CallData(&service_, cq_.get());
    void* tag;  // uniquely identifies a request.
    bool ok;
    while (true) {
      // Block waiting to read the next event from the completion queue. The
      // event is uniquely identified by its tag, which in this case is the
      // memory address of a CallData instance.
      // The return value of Next should always be checked. This return value
      // tells us whether there is any kind of event or cq_ is shutting down.
      GPR_ASSERT(cq_->Next(&tag, &ok));
      GPR_ASSERT(ok);
      static_cast<CallData*>(tag)->Proceed();
    }
  }

  std::unique_ptr<ServerCompletionQueue> cq_;
  Greeter::AsyncService service_;
  std::unique_ptr<Server> server_;
};

int main(int argc, char** argv) {
  ServerImpl server;
  server.Run();

  return 0;
}

After doing some research I found that I had to implement KeepAlive (https://grpc.github.io/grpc/cpp/md_doc_keepalive.html), so I added these lines:

builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIME_MS, 2000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 3000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);

So far so good: the server works and communication is smooth. But how do I detect that a client has disconnected? The lines I added for the so-called KeepAlive mechanism don't seem to do anything for me.

Where is my mistake? How can I detect on the async server when a client disconnects, for whatever reason?


Let me start with some background.

One important thing to understand about gRPC is that it uses HTTP/2, which multiplexes many streams over a single TCP connection. Each gRPC call is an individual stream, regardless of whether the call is unary or streaming. In general, any gRPC call can have zero or more messages sent in each direction; a unary call is just the special case with exactly one message from client to server, followed immediately by exactly one message from server to client.

We usually use the word "disconnection" to mean the TCP connection dropping, as opposed to an individual stream terminating, although people sometimes use the word the other way around. I'm not sure which one you mean, so I'll answer both.

The gRPC API exposes the stream lifetime to the application, but not the TCP connection lifetime. The intent is that the library handles all the details of managing TCP connections and hides them from the application -- we don't actually expose a way to tell when a connection drops, and you shouldn't need to care, because the library will automatically reconnect for you. :) The only case visible to the application is that if streams are already in flight on a TCP connection when it fails, those streams will fail.

As I said, the library does expose the lifetime of individual streams to the application; the lifetime of a stream is basically the CallData object in the code above. There are two ways to find out whether a stream has terminated. One is to explicitly call ServerContext::IsCancelled() (https://grpc.github.io/grpc/cpp/classgrpc__impl_1_1_server_context_base.html#a8cddeac523cbcfb67113bfd39b70c148). The other is to request an event on the CQ that notifies the application of the cancellation asynchronously, via ServerContext::AsyncNotifyWhenDone() (https://grpc.github.io/grpc/cpp/classgrpc__impl_1_1_server_context_base.html#a0f1289f31257e6dbef57bc901bd7b5f2).
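One wrinkle when combining AsyncNotifyWhenDone() with the CallData pattern above: the done notification needs its own tag, distinct from the `this` pointer used for Proceed(), so the event loop must dispatch on the tag's kind rather than blindly casting to CallData*. The sketch below shows just that dispatch shape; to keep it self-contained it simulates the completion queue with a std::queue, and the TagKind/OpTag/Handler names are mine, not gRPC API. In real code the done tag would be registered with ctx_.AsyncNotifyWhenDone(&done_tag_) *before* requesting the RPC, the events would come from cq_->Next(), and OnDone() would check ctx_.IsCancelled().

```cpp
#include <queue>

struct Handler;

// Every tag handed to the completion queue carries its owner and the kind
// of event it represents, so one handler can own several outstanding tags.
enum class TagKind { kProceed, kDone };
struct OpTag {
  Handler* owner;
  TagKind kind;
};

struct Handler {
  // proceed_tag plays the role of the `this` tag in the question's code;
  // done_tag would be registered via ctx_.AsyncNotifyWhenDone(&done_tag)
  // before the call starts, and is delivered exactly once when the RPC
  // ends for any reason (completion or client disconnect).
  OpTag proceed_tag{this, TagKind::kProceed};
  OpTag done_tag{this, TagKind::kDone};
  bool finished = false;
  bool cancelled = false;

  void Proceed() { /* normal CallData state-machine work */ }
  void OnDone(bool rpc_cancelled) {
    finished = true;
    cancelled = rpc_cancelled;  // real code: ctx_.IsCancelled()
  }
};

// Stand-in for the HandleRpcs() loop: pull tags and dispatch on their
// kind instead of casting every tag straight to CallData*.
inline void Drain(std::queue<OpTag*>& cq) {
  while (!cq.empty()) {
    OpTag* tag = cq.front();
    cq.pop();
    if (tag->kind == TagKind::kProceed) {
      tag->owner->Proceed();
    } else {
      tag->owner->OnDone(/*rpc_cancelled=*/true);
    }
  }
}
```

With this shape, a client hanging up mid-call surfaces as the done tag arriving on the CQ, and the handler can clean up instead of waiting forever for its next Proceed() event.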

Note that, in general, a unary example like the HelloWorld above doesn't really need to worry about detecting stream cancellation, because from the server's perspective the whole stream doesn't actually last very long. It's usually more useful for streaming calls. There are exceptions, though -- for example, a unary call that has to do a lot of expensive asynchronous work before it can send its response.
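In that expensive-unary case, the server can consult ServerContext::IsCancelled() between chunks of work and bail out early instead of computing a reply nobody will read. A minimal self-contained sketch of that shape, using a std::atomic<bool> as a stand-in for IsCancelled() (the CancelFlag and BuildReply names are illustrative, not gRPC API):

```cpp
#include <atomic>
#include <string>

// Stand-in for ServerContext::IsCancelled(): in a real server this would
// be the gRPC runtime's view of the stream, not a flag we flip ourselves.
using CancelFlag = std::atomic<bool>;

// Expensive unary handler: do the work in chunks, checking for
// cancellation between chunks. Returns false if the client went away,
// in which case the caller should skip responder_.Finish() work.
inline bool BuildReply(const CancelFlag& cancelled, int chunks,
                       std::string* reply) {
  for (int i = 0; i < chunks; ++i) {
    if (cancelled.load()) return false;  // real code: ctx_.IsCancelled()
    reply->append("chunk;");             // one slice of the expensive work
  }
  return true;  // safe to send the reply via Finish(reply, Status::OK, this)
}
```

The same check-between-chunks idea applies whether the work is CPU-bound, as here, or a chain of asynchronous sub-requests.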

I hope this information is helpful.
