# implement active srs sync and celery
#
# Reworks the entire logic of how active streams are tracked. Instead of
# keeping a counter by listening only to SRS callback hooks, portier now
# actively scrapes the HTTP API of known SRS nodes to retrieve information
# about all streams that currently exist on a node. This prevents portier
# from drifting out of sync on active stream counts, and it allows us to
# show more stream data to users in the future, since the SRS API also
# exposes the codecs, stream resolutions and data rates seen by SRS itself.
#
# To implement this, the previous remains of Celery have been made active
# again: it is now required to run exactly one Celery beat instance and one
# or more Celery workers beside portier itself. These make sure that all SRS
# nodes are scraped every 5 seconds, on top of the scrape triggered by every
# SRS callback hook, which keeps the data fresh at all times.
#
# The Celery beat schedule also lets us implement cron-based automation for
# other features (restream, pull, etc.) in the future, so it is acceptable
# to pull in something heavier here rather than relying on a system cron
# that runs a custom management command. (See the illustrative Celery sketch
# at the end of this file.)

from django.dispatch import receiver
from django.db.models.signals import post_save, post_delete
from config.models import Restream, Stream
from config.signals_shared import stream_active, stream_inactive
from concierge.models import Task


@receiver(stream_active)
def create_restream_tasks(sender, **kwargs):
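    """Create restream tasks for every active Restream when a stream becomes active."""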
    stream = Stream.objects.get(stream=kwargs['stream'])
    restreams = Restream.objects.filter(active=True, stream=stream)
    for restream in restreams:
        Task.objects.get_or_create(stream=restream.stream, type='restream', config_id=restream.id,
                                   configuration=restream.get_json_config())


@receiver(post_save, sender=Restream)
def update_restream_tasks(sender, **kwargs):
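    """Recreate the restream task whenever a Restream configuration is saved."""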
|
2020-05-31 12:16:09 +02:00
|
|
|
instance = kwargs['instance']
|
|
|
|
# TODO: check for breaking changes using update_fields. This needs custom save_model functions though.
|
|
|
|
|
|
|
|
# Get the current task instance if it exists, and remove it
|
|
|
|
try:
|
2020-05-31 15:22:05 +02:00
|
|
|
task = Task.objects.filter(type='restream', config_id=instance.id).get()
|
2020-05-31 12:16:09 +02:00
|
|
|
task.delete()
|
|
|
|
except Task.DoesNotExist:
|
|
|
|
pass
|
2020-04-26 22:25:58 +02:00
|
|
|
|
2020-05-31 12:16:09 +02:00
|
|
|
# If the configuration is set to be active, and the stream is published, (re)create new task
|
|
|
|
if instance.active and instance.stream.publish_counter > 0:
|
|
|
|
task = Task(stream=instance.stream, type='restream', config_id=instance.id,
|
|
|
|
configuration=instance.get_json_config())
|
2020-04-26 19:46:58 +02:00
|
|
|
task.save()
|
2020-05-31 15:20:22 +02:00
|
|
|
|
|
|
|
|
2024-02-27 20:44:35 +01:00
|
|
|
@receiver(post_delete, sender=Restream)
def delete_restream_tasks(sender, **kwargs):
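    """Remove the restream task when its Restream configuration is deleted."""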
    instance = kwargs['instance']
    # Get the current task instance if it exists, and remove it
    try:
        task = Task.objects.filter(type='restream', config_id=instance.id).get()
        task.delete()
    except Task.DoesNotExist:
        pass
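

# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the original module: one way the 5-second
# SRS scrape described in the note at the top of this file could be wired up
# with Celery beat. The "portier.celery" app path, the SRS_NODES setting and
# the scrape_srs_nodes task are assumptions for illustration only; the
# endpoint is assumed to be SRS's /api/v1/streams/ stream-listing API.
# ---------------------------------------------------------------------------
import requests
from celery import shared_task
from django.conf import settings

from portier.celery import app  # assumption: the project's Celery app lives here


@shared_task
def scrape_srs_nodes():
    """Fetch the current stream list from every configured SRS node."""
    # assumption: settings.SRS_NODES is a list of SRS HTTP API base URLs
    for base_url in getattr(settings, "SRS_NODES", []):
        response = requests.get(f"{base_url}/api/v1/streams/", timeout=2)
        response.raise_for_status()
        data = response.json()
        # ...reconcile local Stream state (publish counters, codecs, bitrates)
        # against data.get("streams", []) here...


@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Scrape all SRS nodes every 5 seconds, on top of the callback-triggered scrapes.
    sender.add_periodic_task(5.0, scrape_srs_nodes.s(), name="scrape srs nodes every 5 seconds")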