Pipelines was born out of work done by the .NET Core team to make high-performance I/O easier in .NET. In this …
>> Today, on the ON.NET show, we’re learning about Pipelines, which David and Pavel built. Come learn about it and make your app run faster. [MUSIC] Hey. Welcome to another episode of the ON.NET show. Today, we’re going to talk about System.IO.Pipelines. We’ve got David and Pavel here from the ASP.NET team. >> Hey.
>> So, what is this thing, Pipelines? >> Pipelines came out of us doing work for TechEmpower. So, TechEmpower is the benchmarks for all the web frameworks. >> Right. Like PHP. >> PHP, Node.js, Go, [inaudible], C++. >> Java. >> Java. >> Now, [inaudible], .NET’s there. >> Yes. >> Yes.
>> We’re number seven now in the latest results. >> Yes. >> That’s pretty cool. >> Yes. >> Do you get paid by how many spots you go up each time? >> That’s the plan. >> Yes. Yes. >> I’m trying to get to one. >> But if it goes down, you have to pay us, right? >> Yes. Yes. That’s how it works. >> Okay. >> Yes. So, we did a bunch of work in Kestrel, our web server for ASP.NET Core, in 2.0 and 2.1 to increase the perf. Out of that work came this library that’s at the heart of Kestrel, and it ended up being called Pipelines because the name Channels was taken already. We had a lot of interesting names; Pipelines was the one that came out of it. It’s a library for doing high-performance networking in an easy way in .NET. >> Okay. Can we go to the next slide? Because I think it pretty much says exactly that.
>> Yes. So, what we saw is that in .NET, it was easy to write code that was idiomatic but slow. If you were reading from the network or reading from file streams, it would be really easy to write that code, but if you then measured the performance, it would be bad. To make it good, you had to add buffer pooling and management, and it suddenly got super hard and error-prone. So the goal was: could we make the default implementation fast? We spent a lot of time making the code in Kestrel both performant and easier to write, for maintenance reasons, and we ended up with this beautiful graph. You can show it. >> Yes. So, this graph basically compares two apps that do the same thing. They read full lines, separated by newlines, from a network stream or from the network directly, and they don’t print them on the screen, because printing on the screen is slow.
So, we just simulate work that is done really often in different kinds of servers, where you have to read some data from the network, then find a separator, and then parse it. This is basically all Kestrel does: it finds newlines and then splits everything by spaces. >> Is that what you get paid for? >> Yes. >> Yes, totally. >> Just shuffling text around- >> I see. >> -and even with a very basic pipe implementation, you can see the pipe implementation scales much better as the message size increases. >> Okay. Right. So what I see is, as you go to the larger message sizes, you get to about 3x. >> It’s 2x. It’s more like 2x. >> Okay, 2x. I was always really bad at math. Okay, 2x. >> Yes. Just quickly: the first part of the graph is flat like that because, at that point, it doesn’t matter how fast you are processing things; you’re just not writing to the network fast enough to give the server enough work. >> Right. But I think the takeaway, though, isn’t so much that Pipelines was 2x streams. That’s right. To get this performance,
you basically didn’t have to know a lot. Basically,- >> Right. >> -go read the docs.microsoft.com document,- >> Yes. >> -follow this pattern, and you’re good to go. >> Correct. >> Whereas with streams, to get to that level or even higher, you would basically have to take a 400-level course,- >> Correct. >> -get a manual that was this thick,- >> Yes. >> -and then you should be good [inaudible]. >> That’s right. Yes. >> Okay. All right. Makes sense. >> Yes. >> Yes. So, let’s jump in and try to write a first implementation of what we talked about: reading newline-separated lines from the network stream. >> Right. So, this is the stream-based model? >> Yes. >> This is where we’re doing the comparison, right? >> Yes. >> Okay. >> So, we’ll start with streams. This is the simplest thing you would write when you see a stream.
>> You read some data into a buffer and then you process a line with this data. But there are a bunch of problems with this approach. First, and this is a very frequent mistake, people forget that ReadAsync returns how much data it read. So, it could have read one byte and you are trying to process 1K of data. The other common mistake, and it’s pretty hard for beginners, is what to do when a single ReadAsync call didn’t give you all the data you are expecting; you have to write a loop. So, it gets harder there. And what if multiple pieces of data come in a single ReadAsync call? That’s hard too. So, let’s improve our sample a bit: observe what ReadAsync returned, and try to parse every line that is in the buffer.
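A minimal sketch of the naive version being described; ProcessLine is a hypothetical parsing helper, and both bugs called out above are present:

    // Naive: assumes one ReadAsync returns exactly one complete line.
    async Task ProcessLinesNaiveAsync(Stream stream)
    {
        var buffer = new byte[1024];
        // Bug 1: the return value of ReadAsync is ignored, so we may
        // "process" bytes that were never filled in.
        await stream.ReadAsync(buffer, 0, buffer.Length);
        // Bug 2: no loop, so a line split across reads (or several lines
        // arriving in one read) is handled incorrectly.
        ProcessLine(buffer);
    }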
>> Right. So, this reminds me of all these APIs in the .NET Framework, and I always go for file streams because that’s the one with ReadLine on it. >> Yes. >> Because if you use the underlying ones, then you have to use the- >> StreamReader. >> Yes, StreamReader, these hard-to-use APIs where you have to pass buffers around. >> Yes. >> It’s like, okay, I literally cannot do that. >> It’s simpler, right? >> Yes. >> But you might forget to write the loop. Using streams is not very easy to get right. >> Right. For the most part, people assume that things fit, and they’re like, well, it mostly works in my testing locally. >> Yes. >> Yes. >> Then you deploy it, and in this one edge case, the network sends the data differently and things blow up. >> It’s also very different locally, because everything is fast. With some data, you get all of it together; everything fits into a buffer. When you move to production, everything is slow,
pipes are slow, networks are lagging, things start breaking. So, in this sample, what we did is we looked at what ReadAsync returned, we tried to find the delimiter, and then we tried to process the data line by line, looking at delimiters. >> Looking at the code, you can tell there’s a bunch of off-by-one errors somewhere in here. >> Yes. >> I don’t know where, but there’s a plus one missing somewhere. So, it’s prone to these kinds of bugs, where you’re parsing, you’re keeping track of bytes read and bytes consumed, you subtract them to get the remainder, then you add something else, and you aren’t sure anymore that the code is all working, right? >> Yes. >> This code looks better, but it doesn’t work. >> You need a million test cases- >> Exactly. >> -to validate that.
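A sketch of roughly where the sample is at this point: a read loop that tracks how many bytes are buffered and scans for newlines. ProcessLine is still hypothetical, and this version still caps lines at 1K:

    async Task ProcessLinesAsync(Stream stream)
    {
        var buffer = new byte[1024];
        var bytesBuffered = 0;
        while (true)
        {
            var bytesRead = await stream.ReadAsync(buffer, bytesBuffered, buffer.Length - bytesBuffered);
            if (bytesRead == 0) break;
            bytesBuffered += bytesRead;

            // Scan the buffered bytes for newlines; this bookkeeping is
            // exactly where the off-by-one bugs like to hide.
            int lineStart = 0, newlineIndex;
            while ((newlineIndex = Array.IndexOf(buffer, (byte)'\n', lineStart, bytesBuffered - lineStart)) >= 0)
            {
                ProcessLine(buffer, lineStart, newlineIndex - lineStart);
                lineStart = newlineIndex + 1;
            }

            // Shift the leftover partial line to the front of the buffer.
            Buffer.BlockCopy(buffer, lineStart, buffer, 0, bytesBuffered - lineStart);
            bytesBuffered -= lineStart;
        }
    }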
>> Yes. So, it only handles up to a kilobyte of data, and it allocates too much. To make this work, you want to resize the buffer as more bytes come in past your limit, and there is a type in .NET Core, I think 1.1 maybe, that we added, called ArrayPool. >> That’s right. >> It’s a built-in pool of buffers that was made to help exactly these situations. So, we add more code to our sample, and it rents from the pool and doubles the buffer size whenever you hit the limit, and it looks great, but it’s still broken. There’s tons of copying, and memory is never reduced: you double the size as you read more data, but guess what? On the next round, you’re still not back to a 1K buffer, because you never shrunk the buffer as you consumed the data that was in it.
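A sketch of the renting-and-doubling pattern being described, using ArrayPool from System.Buffers (the resize logic is the part that stays broken):

    byte[] buffer = ArrayPool<byte>.Shared.Rent(1024);
    // ... read until bytesBuffered == buffer.Length, then grow:
    byte[] bigger = ArrayPool<byte>.Shared.Rent(buffer.Length * 2);
    // Tons of copying, as noted: every resize copies everything read so far.
    Buffer.BlockCopy(buffer, 0, bigger, 0, bytesBuffered);
    ArrayPool<byte>.Shared.Return(buffer);
    buffer = bigger; // the buffer only ever grows; it is never shrunk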
>> I see. You’re saying that when you get to the next line, you’re left with the buffer at the size it was for the previous line? >> Correct. So, imagine you get like 1K, 1K, 1K, one gig, 1K. >> Yes. >> For that next 1K, you’re still storing a gig of buffer, unless you write code to compact your memory. >> Right. >> Which you wouldn’t do, because that never happens, right? >> Yes. >> Yes. That’s no way to run a server. >> Exactly. So, instead, you’re super smart, and you don’t allocate a giant buffer; you allocate a list of buffers. >> How does that work with Docker limits? >> Oh, man. So I have more code: because you’re super clever, you added a list of buffers instead of one buffer. But now, as a result, everything got more complicated.
>> You now have to handle n buffers instead of one, and things suddenly get super tricky with errors and with returning buffers. So we append to the current buffer if it has enough room; otherwise, we create a new one, add it to the list, and keep going. Then you have to return buffers to the pool whenever you’re done with them, which is super tricky at that point. Then, you’re supposed to throw away all of your code and start over, and write the line scan over a list of buffers instead of one buffer. So, things go off the rails, and this doesn’t even have any error handling. It isn’t thread-safe. And the reader can be overwhelmed, so sometimes you want a max size on the data buffered from the client. So, Pipelines aims to solve these problems in a very simple way, and this is it. >> So, are you serious?
>> So we saw, I don’t know, a hundred lines of code- >> Yes. >> -on the previous slides. Are you telling me that this is basically four lines of code? >> No. >> Okay. >> I lied. I lied. >> Okay. Okay. >> So, this is part of it. >> This is the pretty picture? >> Yes. >> And the next slide is the reality? >> Yes. That’s correct. >> Okay. >> Wow. >> So, the thing Pipes does is, Pipes tries to do buffer management on your behalf. In this example, you can see there are two sides to a pipe: there’s the writer and the reader.
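At its simplest, creating a pipe and getting at its two sides looks like this (a minimal sketch with default options):

    // using System.IO.Pipelines;
    var pipe = new Pipe();
    PipeWriter writer = pipe.Writer; // producer side: fills and flushes buffers
    PipeReader reader = pipe.Reader; // consumer side: reads and consumes them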
>> The writer’s job is to take data from some other source and put it into a buffer; the reader then consumes that data. So, in this sample, I’m showing a call to GetMemory, which says, "Give me some memory from the underlying source, the pool or whatever." I pass that to the socket, the socket fills it in, and I call Advance, which tells the pipe how much the socket wrote. Then Flush makes the data that I just read available to the reader. >> Yes. >> That’s the core loop.
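A sketch of that write loop against a socket, close to the pattern being shown (the method name FillPipeAsync is just for illustration):

    async Task FillPipeAsync(Socket socket, PipeWriter writer)
    {
        while (true)
        {
            // Ask the pipe for memory; the pipe rents it from its pool.
            Memory<byte> memory = writer.GetMemory(512);
            int bytesRead = await socket.ReceiveAsync(memory, SocketFlags.None);
            if (bytesRead == 0) break; // the client disconnected

            // Tell the pipe how much the socket wrote into that memory.
            writer.Advance(bytesRead);

            // Make the new data visible to the reader.
            FlushResult result = await writer.FlushAsync();
            if (result.IsCompleted) break; // the reader is done
        }
        writer.Complete();
    }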
>> Then, for the read side, I read some data from the pipe. Notice I don’t pass in a buffer; I get one back, and that’s key. Unlike streams, buffers are actually handled by the pipe itself. You just say, "Read data," and it gives you data. It buffers on your behalf. >> I see. Then how is lifetime handled for those buffers?
>> Perfect question. So, there is a call to ReadAsync, and whether you haven’t found the data you need yet or you have found it, you call AdvanceTo. That AdvanceTo is the key difference between Pipelines and streams: the pipe buffers data on your behalf as you’re reading from the network. If you don’t have what you’re looking for, you say, okay pipe, I saw 10 bytes, but I need 15, I need 20, I need 30, and you call AdvanceTo to say, okay, I examined this much, I still need more, and the pipe will handle that efficiently. It’ll store the data in a list, it’ll buffer it for you, and it’ll rotate buffers for you as you consume data, where normally you would have to do all of that yourself. >> Right. But when you say AdvanceTo- >> Yes. >> -you’re holding some data already- >> Yes. >> -in one of these buffers. Does the lease expire on that buffer and it goes back to the pool, or can you still hold on to it? >> So, when you call AdvanceTo, you’re giving control back to the pipe. You no longer own it. >> Yes. So, the lease expired, basically? >> Yes, exactly. >> Okay, so you give that one back, and, like I saw, does AdvanceTo return a new buffer to you? >> No. Read returns buffers, and AdvanceTo says, I have already consumed this much. So, you give AdvanceTo a position up to which you’ve consumed. >> Oh, I see. >> So, let’s say you have 4K and you consume 1K; you say AdvanceTo, I read 1K. Then, if there’s more data to be had, the next read gives you data from 1K through the rest of the 4K. >> I see. So, your lease only expired on the first 1K, in that example? >> Yes. >> Got it.
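A sketch of the corresponding read loop using ReadAsync and AdvanceTo (ProcessLine is again a hypothetical handler):

    async Task ReadPipeAsync(PipeReader reader)
    {
        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            // Consume every complete line currently buffered.
            SequencePosition? newline;
            while ((newline = buffer.PositionOf((byte)'\n')) != null)
            {
                ProcessLine(buffer.Slice(0, newline.Value));
                buffer = buffer.Slice(buffer.GetPosition(1, newline.Value));
            }

            // Consumed: everything up to the last full line. Examined: everything,
            // so the next ReadAsync waits for more data instead of spinning.
            reader.AdvanceTo(buffer.Start, buffer.End);
            if (result.IsCompleted) break;
        }
        reader.Complete();
    }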
>> But you shouldn’t use any data after you’ve called AdvanceTo. Just call ReadAsync on the pipe again and it will give you the data back. Don’t put it into a field and use it later, because when you call AdvanceTo, the pipe assumes control of it again, right. >> So, you basically drop- >> The reference. >> -the reference that you had access to, and then Read will do the right thing? >> Yes. >> At a super-high level, think of it as kind of rent and return from a buffer pool, and it hides those mechanics behind Read and AdvanceTo, where you tell it when you’re done so it can handle that for you. So, if you consumed the data, it will say, okay, you consumed it, it goes back to the pool, and it recycles it. >> Great. So this goes back to the naive implementation again, which is, you just basically call these two APIs that we saw in this particular example, and you just use whatever Read gives you. >> Yes. >> And every time you call AdvanceTo, you- >> It’s done. >> It’s done, and then you go back to Read again. >> Correct. >> All right. So, demos. >> Awesome.
>> Yes. So, for the demo, we have the server app that we discussed, which reads lines from the network and prints them on the screen. >> So you have a client and [inaudible]. >> On the screen, I have a client and the server. >> Are these both ASP.NET apps, or are they just console apps? >> They’re both console apps. >> Okay. >> You don’t have to use ASP.NET to use pipelines. >> All right. Interesting. >> Okay. So, is there some code in the one that says "Listening On"? Something that basically says, "Please go look at this port to see if there’s something there"?
>> Or maybe you should do the demo first and then- >> Yes. So what the client does is, for every keypress, it sends that keypress to the server, and when the server detects a newline, it prints the line on the screen. To answer your question, we just use the Socket class and the socket’s ReceiveAsync method to receive data from the network. >> But how does it handle, for example, say you started the listener five minutes before you started the producer; will it wait? Will it just wait in a loop for the producer? >> Yes. We will wait for a client to arrive, and then we’ll start the loop to read lines and write lines. The ReadAsync simply won’t complete until data comes in, so the read loop would just hang on ReadAsync until the socket got data. >> Right, so I should think of it as a ReadLine or ReadKey- >> Yes. >> -on the console? >> Yes, but without blocking. >> Yes. >> Yes. >> A nonblocking version of that, yes. >> Yes. >> [inaudible]. >> Yes. >> Yes. Let’s look at how we handle a keypress when it arrives.
>> So, before we call ReceiveAsync, we ask the pipe for a block of memory to put data into when it arrives from the network. >> How much memory? >> The pipe decides how much memory you get. There are some settings you can tweak before creating the pipe, but then it’s kind of a negotiation between how much the pipe wants to get and how much the pool backing our memory wants to give. The pool has a max buffer size that it’s allowed to give to the pipe, and the pipe negotiates between how much the consumer wants, how much it wants for efficiency, and how much the pool can give, and you get the result of that. >> Right. So you could have specified a number? You can hint at a number, but you are not guaranteed to get it, I see. >> So as an example, in Kestrel: say you have a 4K buffer and you’ve read it down to like one byte remaining. You wouldn’t want to keep using that buffer to read data, because why do a one-byte read? So, we ask for a minimum of like 2K, and if we ever get below that 2K, we get a new buffer. You can play games like that in performance-sensitive code, where you want to make sure you always have at least this much room to read into from the underlying source, so that you aren’t going to thrash with one-byte reads. >> Because that makes no sense, right? >> Yes, exactly. >> Yes. >> But don’t start with that; the default is right most of the time. That’s enough. >> Yes.
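A sketch of the knobs being described; the parameter names below are real PipeOptions settings, but the values are only illustrative:

    var pipe = new Pipe(new PipeOptions(
        pool: MemoryPool<byte>.Shared,
        minimumSegmentSize: 4096,          // hint for how large rented segments should be
        pauseWriterThreshold: 64 * 1024)); // back-pressure: pause the writer past this

    // The writer can also hint a per-call minimum, like Kestrel's 2K example:
    Memory<byte> memory = pipe.Writer.GetMemory(2048);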
>> Yes, and after data arrives over the network, we get back the amount of data that was read. >> So what I just got from that is, "Don’t do what David does." Is that the lesson here? >> Yes, and always measure first. >> Okay. >> Measure first. >> Trust me. >> And we just notify the pipe of how much data was read into the buffer, because the buffer is just a plain byte array; it doesn’t know what you wrote into it. After we’ve notified it, we call FlushAsync, and FlushAsync is what makes the data visible to the reader side.
>> Until then, ReadAsync on the reader wouldn’t return, but after we call FlushAsync, we get onto the reader side. The result of ReadAsync contains multiple properties, but the one we’ll look at right now, and the most important one, is the buffer, and you can see it contains our single byte. >> And it has segments. >> And it has segments, which we’ll cover later. So you just take the buffer; there are a bunch of extension methods on it that you can use, like alternatives to IndexOf, and it has Slice, so you can take a subset of it and pass it around. >> Without copying. >> For folks who don’t know what Slice is, that’s a method on top of our span types, like Span<T>. >> Yes. >> It isn’t the span’s Slice, but it’s the equivalent of the span’s Slice. >> Oh, where you have a buffer and you can take a piece of it without allocating. >> Got it. >> A subsegment. >> And in this example, we try to find the position of the newline, and when we don’t find it, we call AdvanceTo, saying that we looked at everything we got and we want more, but we didn’t consume anything. So next time, when new data arrives, we want to get everything again. Now, let’s go through a couple more keypresses. >> Okay. >> And the buffer gets bigger every time. >> Right. You can look at the buffer, and it now has three bytes instead of one. >> Yes. >> So normally, you’d have this in your own code: you call Read with a stream, get some data back, and you’d have to
buffer it yourself until the next read. >> And it has to happen in every layer. If you wrap a network stream in an SslStream, then in a JSON reader, then in whatever MVC deserializer- >> Yes. >> -each layer says, I need a bit more data, let me buffer it. It happens on so many levels, and that’s what pipes solve. >> Yes. So you can’t be sure who’s buffering, so you just buffer it in case. >> Right. So, with the code as written, now imagine you do have this one-gigabyte line. Is this buffer just going to keep on growing? >> So, good question. >> Yes.
>> We don’t grow the buffer; we append more buffers. So this ReadOnlySequence structure that you get is backed by a linked list of buffers. >> I see. >> That’s why it has its own Slice, because its Slice can span multiple buffers. So, you don’t have to grow buffers when you want to receive more data, and you can still look at it as one logical piece. >> And there are two things to unpack from that. There’s memory bloat: you don’t want one gig in one giant array, because that would land on the large object heap, which is bad. That won’t happen here, but you would still have a gig allocated. I mean, we do have default settings in the box, so you can’t really over-allocate. I think by default it’s a couple K- >> 32K. >> -which may be a bit small, actually, for some scenarios. >> Yes. >> Any more than that, and it’ll just tell you, like, bro, change the settings to be higher. >> Yes. So, there’s probably a horrible analogy coming. >> Yes. >> With CPU, we had- we brought async. >> Yes.
>> So that was about asynchrony. >> Yes. >> And this feels like it’s more about data; it almost feels like it’s a segment tree or something like that. >> Let’s not claim that term. >> Yes, a very bad term, but it feels like it’s basically breaking the data into segments without making you, as the developer, deal with that, because that’s kind of the same thing async did. >> Yes. >> Because it still allows you to write synchronous-looking code. >> Yes, and this still allows you to write segment-free- >> Yes. >> -code, so you don’t have to deal with the fact that it’s all been broken apart. >> Yes, it’s making streamed data- >> [inaudible] consumable as logical payloads. >> Yes. >> Yes. >> Although, for streaming data, if you talk to the C# team- >> Yes.
>> -about C#, they’ll say that async foreach, like IAsyncEnumerable- >> Yes. >> -is the new answer for streaming data. So, do we get to cancel this project? >> Oh, no. >> What about- how does this relate to their async data solution- streaming data solution, is what I meant to say? >> There is a world where you could foreach-async over the data coming in on the reader side- >> Yes. >> -where you’d foreach over the buffers coming in. The only weird part is that you’d have to call AdvanceTo yourself at every stage of the loop, but we are playing with ideas on how to make that work. >> And on the producer’s side, could you do a yield return into a pipeline? >> That would be interesting. It would be buffers,
but probably not out of the box; there could be an adapter, though. >> Okay. >> Yes. >> So, I think a lot of the time you can’t just foreach over the data that’s inside the pipe, because you have to have enough of it before you can decide what to do. >> I see. >> If all data were like that, so that you could just foreach over it, we wouldn’t have a pipe at all, because then you wouldn’t need to worry about buffering; you’d just read into the same small buffer and print it on the screen, right? If you just want to print a stream onto the screen, there are none of the problems we have in this sample. You just read it, and print it, and it’s gone. That’s where it doesn’t exactly fit together. >> I see. Okay. >> Yes. >> But we are thinking about ways to integrate them better. Now, once you’ve found the newline,
you call this method that, unfortunately, has to be aware that the buffer is not a single segment, to prevent us from allocating. You can always call buffer.ToArray(), but then you allocate an array. The advantage of having this is that you can avoid that by just calling Write on every segment.
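A sketch of handling a possibly multi-segment buffer without flattening it; ProcessSegment is a hypothetical per-segment writer:

    void WriteLine(in ReadOnlySequence<byte> buffer)
    {
        if (buffer.IsSingleSegment)
        {
            ProcessSegment(buffer.First.Span); // fast path, no iteration needed
            return;
        }
        // Walk each segment instead of calling buffer.ToArray(),
        // which would allocate a brand-new array.
        foreach (ReadOnlyMemory<byte> segment in buffer)
        {
            ProcessSegment(segment.Span);
        }
    }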
>> So, we have this first-class type called ReadOnlySequence that we introduced in, I think, 2.1. The whole idea is that there’s a type in the BCL that represents a multi-segment list of buffers, and the goal is that we’d have a bunch of APIs that take this by default, so you wouldn’t have to unpack it yourself for every single thing. In .NET Core 3.0, we’re adding a type called SequenceReader: you pass a buffer like this into a SequenceReader, and it gives you helpers for reading things like ints and dates out of these buffers, so you don’t have to write the code yourself to unpack the segments. Today it’s a little bit tricky, but that type should help people write parsers much more easily on top of pipelines.
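A sketch of what that can look like with the .NET Core 3.0 SequenceReader; TryReadTo is one of its real helpers, while ProcessLine remains hypothetical:

    var sequenceReader = new SequenceReader<byte>(buffer);
    // Read one newline-terminated line without unpacking segments by hand.
    if (sequenceReader.TryReadTo(out ReadOnlySequence<byte> line, (byte)'\n'))
    {
        ProcessLine(line);
    }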
>> Right. Now, this ReadOnly- >> Sequence. >> Sequence. >> Yes. >> Is that a struct or a reference type? >> It’s a struct. >> Okay, right. >> So it’s not going to allocate. >> Yes, exactly. That’s where I was headed. So, as we foreach through it, there are no allocations on our server. >> Correct. >> The only place the pipe allocates is when you ask for memory on the write side; in our case, that’s when it asks the pool for more memory. After that, reading is completely free, no matter how many times you read. >> Right. So, from a reliable-software standpoint, once you ask for that memory, if you pass the test of having that much memory available, then you should be golden from that point on and not have to worry about OOMs or anything like that. Awesome. >> So, to recap a little bit, since we have some time left,
some of the things that made us go with pipes are things that became standard in .NET Core in general. So, we had this idea that in .NET, you can write a custom awaitable. If you want a thing to be awaitable, you can make your own type and have it be awaitable, and pipes started out being that way, but then we convinced people to introduce a more generic way to have a single object that can be awaited multiple times without having to allocate for every single await. Because one of the key things we saw in the perf work was that every single read on a single socket would allocate a Task, and that’s unfortunate, right? What if you could just have the socket represent the actual state of the read, and you read over and over and over, and you only allocate the socket, nothing else? So, in .NET Core 2.1, pipelines has that behavior, where you allocate once per object instead of once per operation, and so do WebSockets and sockets. So, we improved the whole system via the performance pushes that pipelines helped drive. ValueTask now supports a backing IValueTaskSource, which pipelines implements to make the whole thing allocation-free. That’s a huge deal. >> Yes, I remember reading some of the- >> Back and forth. >> -the back and forth on that work. That was exciting. >> It was good. >> Yes. The other thing about pipes is that we keep talking about the network, but we call it an async buffer-management primitive,
not a networking primitive. So, if you are reading from a file and you still want to get the same benefits, you can. It’s not tied to streams or the network or anything else. >> Yes. >> You just put data in and you get data out, and if you want to do it efficiently without allocating, it’s just bytes in and bytes out.
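A minimal sketch of a pipe with no network involved at all: one task writes bytes in, and a loop reads them out (assuming the usual System.IO.Pipelines and System.Text imports):

    async Task RunAsync()
    {
        var pipe = new Pipe();

        // Producer: write a message, then complete the writer.
        _ = Task.Run(async () =>
        {
            await pipe.Writer.WriteAsync(Encoding.UTF8.GetBytes("hello\n"));
            pipe.Writer.Complete();
        });

        // Consumer: read until the writer completes.
        while (true)
        {
            ReadResult result = await pipe.Reader.ReadAsync();
            // ... process result.Buffer ...
            pipe.Reader.AdvanceTo(result.Buffer.End);
            if (result.IsCompleted) break;
        }
        pipe.Reader.Complete();
    }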
>> In 3.0- ASP.NET Core 3.0- we’re actually exposing pipes on the HTTP context in a first-class way. So, we’ll have Request.Body, and the body will be a pipe. That’s one big thing that we’re at least hoping happens in 3.0. The big idea is that we’ll be able to use the same buffers from the web server all the way through the middleware, all the way through MVC, and we’ll avoid the copies. Right now, we use different pools at different parts of the stack, because this is Kestrel and it adds its own pool, and this is MVC and it has its own pool; but then we’d be using the same pool all the way through. So, we’re hoping it will be a huge win. >> So, we finally built a platform that Netflix can use. Is that a fair analogy? >> Yes. >> Wow. >> Yes.
>> That makes sense. Okay. So, what would you like to leave us with? Usually what we talk about is where to go to get started; and if you want to participate in this at a lower level, are there any design conversations still going on for 3.0 that people can participate in? >> Yes. So, the first thing is the NuGet package; we can bring that up and show it here. >> Oh yes. We forgot the package. >> We forgot to talk about that. So, this is a package, as opposed to being something
that’s part of the .NET Core shared framework. >> Correct. So, it’s a package. It’s .NET Standard; I think it’s 2.0 minimum, I think, and it runs on .NET Framework, it runs on .NET Core; it’s .NET Standard, so it runs everywhere. >> Yes, so click on dependencies; that’s what I always look at. Yes, there we go. >> Okay, so it’s 1.3. >> It runs on- >> -the same as 4.6 and upwards. >> Okay. So, do you get additional functionality in .NET Standard 2.0 versus 1.3? >> It’s mostly perf optimizations, not functionality. >> On .NET Core, this thing is super optimized, and it will get more optimized in .NET Core 3.0 as we add more features. >> Right. >> The older targets are almost just slower; they’re the same features as of today. >> Okay, and are there any good samples to look at? >> Yes, there’s an amazing sample that- Pavel, you should cover it. >> Okay, which sample are you talking about?
>> Yes, the one whose name I just butchered. >> Oh. >> Yes. Right. There’s the blog post you wrote. We can link to that in the show notes. >> Yes. There’s a sample- the name, David, help. You’re going to make me think. >> Man, it’s like I’ve never worked with you before. >> There’s a sample that shows a TCP server and a client using pipelines. >> Is that basically the same sample? >> Then there’s the blog post I will link to, and then there’s Marc Gravell from Stack Overflow. >> Oh, yes. >> Who is, like, the biggest pipelines fan ever.
They just rewrote StackExchange.Redis 2.0 on pipelines. >> So, did they ever finish- and maybe this is a horrible thing to ask, because maybe they didn’t- but did they ever finish their migration? Weren’t they doing this big migration? >> To Core? >> Well, I thought it was to pipelines in particular, and they had- >> They haven’t. >> They’re like, "Oh my God. Now we’re allocating 10 terabytes of data where it was only one kilobyte before." >> They had a bug. They shipped, though. >> Okay. >> Okay, so there are two things. They ported Redis to pipelines; that’s done and shipped, and we went back and forth with them on it. >> It’s all awesome. >> It’s awesome. It’s much faster, and they erased an entire library that they had before to manage sockets, and now they just use pipelines, because it has all of those things built into it. >> Wow! Now, they basically have you guys. >> Yes. >> [inaudible]. >> Exactly. >> You’re on their team now? >> The memory thing is about porting Stack Overflow’s WebSocket server to ASP.NET Core, and they saw that their Gen 0 heaps were ginormous. >> That’s the thing they were talking about recently,
and we’re still fixing that. >> Yes, that’s still a thing. >> Wow. >> Yes. Okay. Good times. Okay guys, thanks for coming on the show. >> Awesome. >> Pipelines sounds awesome. >> It’s awesome if you use it- >> Yes. >> -for everything. >> For everything. Okay. Well, today we learned about System.IO.Pipelines from David and Pavel, and we hope you liked what you saw. Thanks. >> Thank you. >> Thanks. [MUSIC]