Showing results for tags 'javascript sdk'.

Found 7 results

  1. Hi, I'm trying to implement a conference call application on my own, and I'm having issues with streams sometimes disappearing. Example: Client 1 enters the conference call and sits there for a while. Client 2 enters the conference call and does not see or hear Client 1; however, Client 1 can see and hear Client 2. Client 1 leaves and re-enters the call, and now both see each other. As you can see, this gets exponentially worse the more people are in the call: people don't know who can see them and whom they are missing, and eventually someone disappears entirely. I have looked at the code in ant-media/conference-call-application: Conference Call Application over Ant Media Server (github.com), but there is a lot of code there that feels unnecessary for my use case, and I would also like to use my own UI library and CSS and develop the entire application from scratch. It would be nice if there were a barebones React example for a stable conference call without any extra features.
I can provide the relevant part of my application's code (it uses TypeScript):

    import React, { useEffect, useRef, useState } from 'react';
    import { WebRTCAdaptor } from '@antmedia/webrtc_adaptor';
    import useStateRef from 'react-usestateref';
    import ReactPlayer from 'react-player';

    type Stream = {
      id: string;
      src: string;
    };

    let webRTCAdaptor: WebRTCAdaptor | null = null;

    type Props = {};

    export default function SessionReworked({}: Props) {
      let roomTimerId = React.useRef<NodeJS.Timeout | null>(null);

      // video stream related
      const webSocketUrl = '';
      const adaptorReadyRef = useRef<boolean>(false);
      const localVideoRef = useRef<HTMLVideoElement | null>(null);
      const [localMediaStream, setLocalMediaStream] = React.useState<MediaStream | null>(null);
      var [streamsList, setStreamsList, ref] = useStateRef<Stream[]>([]);
      var [localStreamId, setLocalStreamId, localStreamIdRef] = useStateRef<string>('');

      // Buttons and other UI/functionality related hooks
      const [hasJoined, setHasJoined] = useState<boolean>(false);
      const [isCameraOff, setIsCameraOff] = useState<boolean>(false);
      const [isMuted, setIsMuted] = useState<boolean>(false);

      const getVideo = () => {
        if (localVideoRef.current) {
          navigator.mediaDevices
            .getUserMedia({ video: { width: 300 } })
            .then((stream) => {
              setLocalMediaStream(stream);
              let video = localVideoRef.current;
              console.log('video, stream', video, stream);
              video.srcObject = stream;
              video.play();
            })
            .catch((err) => {
              console.error('error:', err);
            });
        }
      };

      useEffect(() => {
        // get webcam video
        getVideo();
      }, [localVideoRef, hasJoined, isCameraOff]);

      useEffect(() => {
        webRTCAdaptor = new WebRTCAdaptor({
          websocket_url: webSocketUrl,
          mediaConstraints: {
            video: true,
            audio: true,
          },
          peerconnection_config: {
            iceServers: [{ urls: 'stun:stun1.l.google.com:19302' }],
          },
          sdp_constraints: {
            offerToReceiveAudio: false,
            offerToReceiveVideo: false,
          },
          localVideoId: 'localMediaStream', // <video id="id-of-video-element" autoplay muted></video>
          bandwidth: 'unlimited', // default is 900 kbps, string can be 'unlimited'
          dataChannelEnabled: true, // enable or disable data channel
          debug: true,
          callback: (info, obj) => {
            const room = obj?.ATTR_ROOM_NAME;
            switch (info) {
              case 'pong':
                // ping-pong is a mechanism to keep the websocket connection alive
                console.log('pong', info, obj, room);
                break;
              case 'joinedTheRoom':
                // called when you join the room
                adaptorReadyRef.current = true;
                console.log('joinedTheRoom', info, obj, room);
                webRTCAdaptor.publish(obj.streamId, room);
                setLocalStreamId(obj.streamId);
                obj.streams?.forEach((stream) => {
                  webRTCAdaptor.play(stream, null, room);
                });
                addStreamToStreamsListNoDuplicates({ id: obj.streamId, src: obj.streamId });
                roomTimerId.current = setInterval(() => {
                  webRTCAdaptor.getRoomInfo(roomId, obj.streamId);
                }, 1000);
                break;
              case 'roomInformation':
                // Used to get room information; called when you join the room
                // and then on every tick of the interval above
                console.log('roomInformation', info, obj, room);
                // Add new streams to the list if they don't already exist in the list
                for (let str of obj.streams) {
                  webRTCAdaptor.play(str, null, room);
                }
                break;
              case 'newStreamAvailable':
                console.log('newStreamAvailable', info, obj, room);
                let newStream: Stream = { id: obj.streamId, src: obj.stream };
                addStreamToStreamsListNoDuplicates(newStream);
                webRTCAdaptor.play(obj.streamId, null, room);
                webRTCAdaptor.enableAudioLevel(obj.stream, obj.streamId);
                break;
              case 'leavedFromRoom':
                if (roomTimerId.current) {
                  clearInterval(roomTimerId.current);
                }
                setStreamsList([]);
                break;
              case 'closed':
                console.log('closed', info, obj);
                break;
              case 'play_finished':
                console.log('play_finished', info, obj);
                webRTCAdaptor.stop(obj.streamId);
                removeStreamFromStreamsList(obj.streamId);
                break;
              default:
                console.log('default', info, obj);
                break;
            }
          },
          callbackError: function (error, message) {
            console.log('errcallback error:', error);
            console.log('errcallback message:', message);
          },
        });

        return () => {
          clearInterval(roomTimerId.current);
          if (webRTCAdaptor && adaptorReadyRef.current === true) {
            webRTCAdaptor.leaveFromRoom(roomId);
            webRTCAdaptor?.stop();
          }
        };
      }, []);

      const addStreamToStreamsListNoDuplicates = (stream: Stream) => {
        const streamId = stream.id;
        const streamSrc = stream.src;
        if (ref.current.some((s) => s.id === streamId)) {
          return;
        }
        setStreamsList((prev) => [...prev, { id: streamId, src: streamSrc }]);
      };

      const removeStreamFromStreamsList = (streamId: string) => {
        setStreamsList((prev) => prev.filter((s) => s.id !== streamId));
      };

      function _leaveRoom() {
        webRTCAdaptor.leaveFromRoom(roomId);
      }

      const _publishStream = () => {
        // roomId, activeAccount, and nanoid come from elsewhere in the full app
        const generatedStreamId = `${roomId}-${activeAccount?.idTokenClaims?.oid || nanoid(5)}`;
        webRTCAdaptor.joinRoom(roomId, generatedStreamId);
      };

      const createRemoteVideoReact = (streamId, src, index) => {
        return (
          <>
            <ReactPlayer
              id={`remoteVideo${streamId}`}
              url={src || ''}
              playsinline
              playing={true}
              controls={true}
              height="100%"
              width="100%"
              style={{ height: '100%', width: '100%' }}
            />
          </>
        );
      };

      const createLocalVideoPlayer = () => {
        return (
          <>
            <ReactPlayer
              id={`localMediaStream`}
              url={localMediaStream || ''}
              playsinline
              playing={true}
              muted={true}
              controls={true}
              height="100%"
              width="100%"
              style={{ height: '100%', width: '100%' }}
            />
          </>
        );
      };

      // and just render the video elements
      return (
        <div className={`grid grid--${streamsList.length}`}>
          {streamsList?.length > 0 &&
            streamsList.map((stream: Stream, index: number) => {
              console.log('StreamsList map', stream);
              if (stream.src && stream.id && stream.id !== localStreamId) {
                return (
                  <div key={`video-${index}`} className={`video-wrapper video-#${index}`}>
                    {createRemoteVideoReact(stream.id, stream.src, index)}
                  </div>
                );
              }
            })}
          <div
            key={`video-${streamsList.length}`}
            className={`video-wrapper video-#${streamsList.length}`}
          >
            {createLocalVideoPlayer()}
          </div>
        </div>
      );
    }

If anyone can spot what I'm doing wrong, or can take my code and try it themselves to see what I'm doing wrong, that would be great. I can also send the full code with the UI components, and access to the deployed frontend, if someone wants to help me. Or if there is already existing code I can use, that would be helpful. Thanks.
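One pattern that may help with the disappearing-stream race described above (a sketch, not something taken from the Ant Media docs): treat the `streams` array delivered by each `roomInformation` callback as the source of truth, and diff it against the locally tracked list instead of unconditionally calling `play` on every reported stream. The helper below is pure TypeScript with a hypothetical name (`diffRoomStreams`), so it can be unit-tested independently of `WebRTCAdaptor`:

```typescript
type Stream = { id: string; src: string };

// Given the streams we are currently tracking and the stream IDs the server
// just reported via `roomInformation`, compute which streams to start playing
// and which to tear down. Pure function: no SDK or DOM dependencies.
export function diffRoomStreams(
  current: Stream[],
  reported: string[],
  localStreamId: string
): { toPlay: string[]; toRemove: string[] } {
  const currentIds = new Set(current.map((s) => s.id));
  const reportedIds = new Set(reported);

  // New remote streams: reported by the server, not yet tracked, not ours.
  const toPlay = reported.filter(
    (id) => !currentIds.has(id) && id !== localStreamId
  );

  // Stale streams: tracked locally but no longer in the room (skip our own).
  const toRemove = current
    .map((s) => s.id)
    .filter((id) => !reportedIds.has(id) && id !== localStreamId);

  return { toPlay, toRemove };
}
```

In the `roomInformation` case you would then call `webRTCAdaptor.play(id, null, room)` only for each `toPlay` entry and `webRTCAdaptor.stop(id)` plus list removal for each `toRemove` entry, which avoids issuing duplicate play requests on every polling tick.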
  2. Hello, everyone. I'm using the JavaScript SDK to create a web conference client app where there are a few publishers and many (hundreds to thousands of) viewers inside each SFU conference room, so I intend to use Ant Media Server in cluster mode. Could you please clarify a couple of questions: 1. I want each publisher to be able to view the streams of the other publishers in the same conference room. Do I need two different connections (two instances of WebRTCAdaptor) in that case: one for publishing (origin server) and one for viewing (edge server)? Or can it be done via a single connection to the origin server? 2. Is it required for an SFU conference room that all publishers of the room are connected to the same origin server?
  3. Hello, everyone. I wonder if it is possible to play back the stream I am currently publishing from the server via WebRTC (say, for diagnostics) instead of just playing the webcam stream directly. I'm trying to call webRTCAdaptor.play() with my own streamId in the publish_started callback, but I get a "notSetRemoteDescription" error which states: "webrtc_adaptor.js:908 set remote description is failed with error: InvalidAccessError: Failed to execute 'setRemoteDescription' on 'RTCPeerConnection': Failed to set remote offer sdp: The order of m-lines in subsequent offer doesn't match order from previous offer/answer." Am I doing something wrong, or is there no such possibility at all?
  4. Hello, everyone. In the /LiveApp/conference.html sample there are three modes available: camera, screen, and camera+screen. What seems strange and inconvenient to me is that we see a permission prompt when switching between the screen and camera+screen modes (and vice versa), because in real life we probably do not want to select a different screen source when switching between these modes. So I wonder: is there any way in WebRTCAdaptor to reuse the previous stream and avoid prompting the user for permission every time they switch between screen and camera+screen and back?
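One possible approach, independent of whether WebRTCAdaptor itself supports it (a sketch with hypothetical names, not the SDK's API): keep the `MediaStream` returned by the first `getDisplayMedia` call and reuse it while its track is still live, prompting again only after the user has stopped sharing. The caching logic can be separated from the browser APIs and tested on its own:

```typescript
// Generic "reuse until dead" cache for an expensive resource such as a
// screen-share MediaStream. `isLive` decides whether the cached value can
// still be used; `create` acquires a fresh one (e.g. a getDisplayMedia call,
// which is what triggers the permission prompt in the browser).
export function makeStreamCache<T>(
  isLive: (value: T) => boolean,
  create: () => Promise<T>
): () => Promise<T> {
  let cached: T | null = null;
  return async () => {
    if (cached !== null && isLive(cached)) {
      return cached; // reuse: no new permission prompt
    }
    cached = await create(); // acquire a fresh resource
    return cached;
  };
}
```

In the browser you might instantiate it as `makeStreamCache(s => s.getVideoTracks().some(t => t.readyState === 'live'), () => navigator.mediaDevices.getDisplayMedia({ video: true }))` and then combine the cached screen track with the camera track when switching to camera+screen, so only the very first switch prompts the user.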
  5. Hello, everyone. Using the JavaScript SDK, when we broadcast in "screen+camera" mode, both streams are combined into one on the client side and sent to the server as a single stream. Is there a way to broadcast the camera and screen streams as separate streams/tracks simultaneously?
  6. I have an HTML page that uses the javascript WebRTCAdaptor to broadcast to an Ant Media Server application. Is there a way for that page to publish the broadcast to two (or more) different Ant Media Server applications simultaneously? If so, can it be done without sending two identical streams of data and doubling the bandwidth? Thanks