<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">On 6/6/2018 3:51 AM, Jerry Leichter
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:9ED78B91-8AB8-4D1B-8BBD-79C6E9F54D42@lrw.com">
<pre wrap="">Now Apple has announced that the next release of iOS will support group connections with up to 32 members. I'm wondering "how they plan to do that".
There are two components to the problem:
1. Doing the necessary computations. Recent iPhones certainly have the necessary compute power to handle mixing of 32 audio streams. They also have GPU's that should be able to handle the compositing and other video processing.
2. But ... you can only do computation <b class="moz-txt-star"><span class="moz-txt-tag">*</span>on data you actually receive<span class="moz-txt-tag">*</span></b>. Sending all 31 audio streams to all phones at all times seems plausible. Thirty one full video streams seems unrealistic.</pre>
</blockquote>
<br>
I had to solve a very similar engineering problem for a chat
application on the Xbox, about 10 years ago. What we did was
organize the participants in a spanning tree, picking the devices
with the best network connections as interior nodes and the devices
with the worst connections as leaves. We controlled the branching of
the tree so as not to overwhelm the nodes. You get a structure in
which each node-to-node or leaf-to-node transmission is encrypted,
but of course each member sees the whole content. The spanning tree
worked fine in practice, although of course latency becomes a
function of the diameter of the graph. I have no idea what the folks
at Apple chose to do, but yes, there are solutions.<br>
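<br>
A minimal sketch of that idea (not the actual Xbox code; the names,
quality scores, and branching cap are illustrative): rank the devices
by connection quality, make the best-connected ones the interior relay
nodes, attach the rest breadth-first so no relay exceeds a fixed
number of children, and note that latency grows with the depth of the
resulting tree.<br>

```python
from collections import deque

def build_relay_tree(participants, max_children=3):
    """participants: list of (name, quality) pairs; higher quality = better link.
    Returns (root_name, tree) where tree maps each device to its children.
    Best-connected devices become interior relay nodes; worst become leaves."""
    ranked = sorted(participants, key=lambda p: p[1], reverse=True)
    names = [name for name, _ in ranked]
    tree = {name: [] for name in names}
    # Attach devices in quality order, breadth-first, under the first
    # relay that still has spare capacity (caps the branching factor).
    frontier = deque([names[0]])
    for name in names[1:]:
        parent = frontier[0]
        tree[parent].append(name)
        frontier.append(name)
        if len(tree[parent]) == max_children:
            frontier.popleft()  # this relay is full
    return names[0], tree

def depth(tree, node):
    """Height of the tree below `node`; per-hop latency grows with this."""
    children = tree[node]
    return 0 if not children else 1 + max(depth(tree, c) for c in children)
```

For example, five devices with a branching cap of 2 yield a tree of
depth 2, and each relay forwards to at most two others.<br>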
<br>
-- Christian Huitema<br>
</body>
</html>