mirror of https://github.com/craigerl/aprsd.git synced 2024-11-10 10:33:31 -05:00

Compare commits


194 Commits

Author SHA1 Message Date
98a62102b7 Don't break logging aprslib failures
This patch removes the newline when logging failures to parse
APRS packets in aprslib.
2024-11-08 13:47:02 -05:00
7d1e739502 Added new features to listen command.
Changed the default to not log incoming packets.  If you want to see
the packets logged, then pass in --log-packets.

Added the ability to specify a list of plugins to load by passing in
--enable-plugin <fully qualified python path to class>
You can specify --enable-plugin multiple times to enable multiple
plugins.

Added a new switch, --enable-packet-stats, to enable the packet stats
thread logging stats of all the packets seen.  This is off by
default.
2024-11-08 13:28:46 -05:00
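
As a rough illustration of the new listen switches described above (a sketch, not the project's actual code): click options declared with multiple=True let --enable-plugin be repeated. aprsd already uses click for its CLI, but every name below other than the option names is an assumption:

    import click

    @click.command()
    @click.option("--log-packets", is_flag=True, default=False,
                  help="Log incoming packets (off by default).")
    @click.option("--enable-plugin", multiple=True,
                  help="Fully qualified python path to a plugin class; repeatable.")
    @click.option("--enable-packet-stats", is_flag=True, default=False,
                  help="Log stats of all packets seen (off by default).")
    def listen(log_packets, enable_plugin, enable_packet_stats):
        # multiple=True means enable_plugin arrives as a tuple,
        # one entry per --enable-plugin on the command line.
        click.echo(f"plugins requested: {list(enable_plugin)}")
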
bd0bcc1924 Fixed the protocol for Stats Collector
The stats() method had an inconsistent name for its serializable argument.
2024-11-08 13:22:53 -05:00
adcf94d8c7 Catch and log exceptions in consumer
This patch adds a try except block around the APRSIS
consumer.  This gives us a chance to log the specific
exception, so we can see why the consumer failed.
2024-11-08 13:21:38 -05:00
9f3c8f889f Allow loading a specific list of plugins
Updated the PluginManager to allow only activating a
specific list of plugins passed in, instead of what is
in the config file.
2024-11-08 13:20:42 -05:00
6e62ac14b8 Allow disabling sending all AckPackets
This patch adds a new config option 'enable_sending_ack_packets', which
is by default set to True.  This allows the admin to disable sending Ack
Packets for MessagePackets entirely.
2024-11-06 18:21:46 -05:00
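
A minimal sketch of how such a boolean option can be defined with oslo.config, which aprsd's configuration uses (see "Convert config to oslo_config" in the changelog below); the option name and default come from the commit message, the group placement is an assumption:

    from oslo_config import cfg

    CONF = cfg.CONF
    opts = [
        cfg.BoolOpt(
            "enable_sending_ack_packets",
            default=True,
            help="Set False to disable sending AckPackets for "
                 "MessagePackets entirely.",
        ),
    ]
    CONF.register_opts(opts)  # assumed to live in the DEFAULT group

    if CONF.enable_sending_ack_packets:
        pass  # build and queue the AckPacket here
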
d0018a8cd3 Added rich output for dump-stats
This patch adds table-formatted output for the stats in the
aprsd dump-stats command.  You can also show the stats in raw json/dict
format by passing --raw, and limit the sections of the
stats by passing --show-section aprsdstats.
2024-11-06 11:39:50 -05:00
2fdc7b111d Only load EmailStats if email is enabled
This patch updates the stats collector to only register the EmailStats
when the email plugin is enabled.
2024-11-06 08:43:25 -05:00
229155d0ee updated README.rst
This patch includes information on building your own
plugins for APRSD.
2024-11-05 20:49:11 -05:00
7d22148b0f
Merge pull request #181 from craigerl/unit-tests
Added unit test for client base
2024-11-05 20:48:27 -05:00
563b06876c fixed name for dump-stats output
Also added a console.stats during loading of the stats
2024-11-05 20:15:52 -05:00
579d0c95a0 optimized Packet.get() 2024-11-05 15:04:48 -05:00
224686cac5 Added unit test for APRSISClient 2024-11-05 13:39:44 -05:00
ab2de86726 Added unit test for ClientFactory 2024-11-05 12:32:16 -05:00
f1d066b8a9 Added unit test for client base
This patch adds a unit test for the APRSClient base class.
2024-11-05 12:15:59 -05:00
0be87d8b4f Calculate delta once and reuse it 2024-11-05 11:54:07 -05:00
d808e217a2 Updated APRSClient
Added some docstrings and some return types, as well
as exception catching around create_client.
2024-11-05 11:46:50 -05:00
7e8d7cdf86 Update PacketList
This patch updates some of the code in PacketList to be
a bit more efficient.  Thanks to the Cursor IDE :P
2024-11-05 11:34:12 -05:00
add18f1a6f Added new dump-stats command
This new command will dump the existing packetstats from the
last time it was written to disk.
2024-11-05 11:33:19 -05:00
c4bf89071a
Merge pull request #180 from craigerl/walt-listen-test
Walt listen test
2024-11-05 11:32:38 -05:00
df0ca04483 Added some changes to listen
to collect stats and only show those stats during listen
2024-11-05 11:29:44 -05:00
3fd606946d Fix a small issue with packet sending failures
When a packet _send_direct() failed to send due to a network
timeout or client issue, we don't want to count that as a send
attempt for the packet.  This patch catches that and allows for
another retry.
2024-10-31 18:10:46 -04:00
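
A sketch of the retry accounting described above, assuming hypothetical wrapper and attribute names around the _send_direct() call from the message (send_count is the counter named elsewhere in this list):

    def send_with_retry_accounting(packet, client):
        """Count a send attempt only when _send_direct() succeeds."""
        try:
            client._send_direct(packet)
        except Exception:
            # Network timeout or client issue: don't charge the packet
            # a send attempt, so it stays eligible for another retry.
            return False
        packet.send_count += 1
        return True
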
dependabot[bot]
fbfac97140 Bump werkzeug from 3.0.4 to 3.0.6
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 3.0.4 to 3.0.6.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/3.0.4...3.0.6)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-31 18:10:45 -04:00
f265e8f354 Fix a small issue with packet sending failures
When a packet _send_direct() failed to send due to a network
timeout or client issue, we don't want to count that as a send
attempt for the packet.  This patch catches that and allows for
another retry.
2024-10-31 17:42:43 -04:00
d863474c13 Added some changes to listen
to collect stats and only show those stats during listen
2024-10-31 09:17:36 -04:00
993b40d936
Merge pull request #178 from craigerl/dependabot/pip/werkzeug-3.0.6
Bump werkzeug from 3.0.4 to 3.0.6
2024-10-29 12:35:17 -04:00
0271ccd145 Added new aprsd admin command
This patch adds the aprsd admin command back.
If you don't expect lots of web traffic, then use
aprsd admin to start the admin interface.
2024-10-29 12:30:19 -04:00
578062648b Update Changelog for v3.4.3 2024-10-29 11:08:27 -04:00
ecf30d3397 Fixed issue in send_message command
Send Message was using an old mechanism for logging ack packets.
This patch fixes that problem.
2024-10-29 09:52:39 -04:00
882e90767d Change virtual env name to .venv 2024-10-29 09:52:18 -04:00
dependabot[bot]
0ca62e727e
Bump werkzeug from 3.0.4 to 3.0.6
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 3.0.4 to 3.0.6.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/3.0.4...3.0.6)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-26 00:26:57 +00:00
14274c93b5 3.4.2 2024-10-18 16:08:09 -04:00
14c0a699cb Cleanup test failures 2024-10-18 12:25:16 -04:00
c12c42b876 cleaned up some requirements
We don't really need gevent or eventlet;
those are only needed for the web admin interface.
2024-10-18 12:25:06 -04:00
765e02f5b3 Collector cleanup 2024-10-18 12:07:02 -04:00
8cdbf18bef Add final stages in Dockerfile
This patch adds another final stage in the Dockerfile
2024-10-17 17:10:59 -04:00
a65262d2ff Sort changelog commits by date 2024-10-17 17:10:03 -04:00
9951b12e2d Log closing client connection.
This patch updates the aprsis connection client to add logging
when the close() happens
2024-10-17 17:09:11 -04:00
3e9bf2422a Added packet log distance and new arrows
This patch adds unicode direction arrows (tx/rx) to packet
logging and adds distance for GPSPackets.
2024-10-17 17:06:28 -04:00
5e9f92dfa6 Added color logging of thread names at keepalive
This patch adds logging of the thread name in color
during keepalive loop output.
2024-10-17 17:04:33 -04:00
5314856101 Removed dumping of the stats on exit
This patch removes the logging of the raw stats dict when the commands
exit.
2024-10-17 17:01:36 -04:00
758007ea3f Removed remnants of QueryPlugin
QueryPlugin was removed a while back after the stats rework.
This patch removes the config options for the Query plugin
2024-10-03 10:34:35 -07:00
a74a66d9c3 Update Changelog 2024-09-23 17:10:35 -04:00
a5dc322066 Removed invalid pyproject classifier 2024-09-23 17:09:43 -04:00
9b843eead9 Update ChangeLog 2024-09-23 17:06:06 -04:00
e5662b95f8 Build > python 3.10 2024-09-16 11:58:54 -04:00
a6f84e42bc retagged v3.4.1 in prep for release 2024-09-16 11:46:53 -04:00
e3ab6e7f59 No limit on change log commits 2024-09-16 11:45:27 -04:00
af3d741833 Rebuild ChangeLog 2024-09-16 11:44:37 -04:00
b172c6dbde Fixed pep8 with packet_list
Small python issue with a pep8 violation.
2024-09-16 11:36:49 -04:00
9d3f45ac30 Updated requirements
for dev and runtime
2024-09-16 11:36:00 -04:00
49e8a622a7 added m2r package to dev requirements
Since we need m2r for converting the ChangeLog.md to rst,
this adds the m2r package to the dev requirements only.
2024-09-16 11:34:00 -04:00
92cb92f89c Update base docs
This patch updates some of the docs files for 3.4.x
2024-09-16 11:33:26 -04:00
37415557b5 Updated Makefile to build Changelog
This patch updates the way we build the changelog
to use the npm auto-changelog package.
2024-09-16 11:31:29 -04:00
5ebbb52a2c Renamed Changelog 2024-09-16 11:21:01 -04:00
673b34c78b Use auto-changelog to generate changelog
Since we removed pbr, which also generates the changelog,
we had to find a solution for generating the changelog.
The best solution was to use the npm package auto-changelog

https://github.com/CookPete/auto-changelog
2024-09-16 10:56:54 -04:00
ffa28fa28a Fixed reference to ThirdPartyPacket
This patch fixes a dereference of core.ThirdPartyPacket from
issue https://github.com/craigerl/aprsd/issues/165
2024-09-16 09:28:31 -04:00
93f752cd6d
Merge pull request #170 from craigerl/dependabot/pip/zipp-3.19.1
Bump zipp from 3.18.2 to 3.19.1
2024-08-20 11:24:47 -04:00
b5aa187d54
Merge pull request #169 from craigerl/dependabot/pip/certifi-2024.7.4
Bump certifi from 2024.2.2 to 2024.7.4
2024-08-20 11:24:37 -04:00
616cd69a2d
Merge pull request #168 from craigerl/dependabot/pip/urllib3-2.2.2
Bump urllib3 from 2.2.1 to 2.2.2
2024-08-20 11:24:24 -04:00
4b26e2b7f7 update to pyproject 2024-07-26 12:11:33 -04:00
f07ef71ce0 Hack Dockerfile for admin fixes? 2024-07-25 10:41:21 -04:00
dependabot[bot]
ee0c546231
Bump zipp from 3.18.2 to 3.19.1
Bumps [zipp](https://github.com/jaraco/zipp) from 3.18.2 to 3.19.1.
- [Release notes](https://github.com/jaraco/zipp/releases)
- [Changelog](https://github.com/jaraco/zipp/blob/main/NEWS.rst)
- [Commits](https://github.com/jaraco/zipp/compare/v3.18.2...v3.19.1)

---
updated-dependencies:
- dependency-name: zipp
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-09 19:27:37 +00:00
dependabot[bot]
ba4d9bb565
Bump certifi from 2024.2.2 to 2024.7.4
Bumps [certifi](https://github.com/certifi/python-certifi) from 2024.2.2 to 2024.7.4.
- [Commits](https://github.com/certifi/python-certifi/compare/2024.02.02...2024.07.04)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-06 01:37:51 +00:00
dependabot[bot]
6d294113f8
Bump urllib3 from 2.2.1 to 2.2.2
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.2.1 to 2.2.2.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/2.2.1...2.2.2)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-17 22:57:21 +00:00
8f1733e493 Updated README 2024-05-27 22:28:19 -04:00
f7a9f7aaab removed 2024-05-27 22:27:49 -04:00
1828342ef2
Merge pull request #164 from craigerl/client_rework
Refactor client and drivers
2024-05-23 12:00:04 -04:00
b317d0eb63 Refactor client and drivers
This patch refactors the client, drivers, and client factory
to use the same Protocol mechanism used by the stats collector,
constructing the proper client according to
the configuration.
2024-05-23 11:38:27 -04:00
63962acfe6
Merge pull request #167 from craigerl/docker-rework
Refactor Dockerfile
2024-05-23 11:37:50 -04:00
44a72e813e
Merge pull request #166 from craigerl/dependabot/pip/requests-2.32.0
Bump requests from 2.31.0 to 2.32.0
2024-05-23 10:59:46 -04:00
afeb11a085 Refactor Dockerfile
This patch reworks the main Dockerfile to do builds for
both the pypi upstream release of aprsd as well as the
github repo branch of aprsd for development.  This eliminates
the need for Dockerfile-dev.

This patch also installs aprsd as a user in the container image
instead of as root.
2024-05-23 10:58:46 -04:00
dependabot[bot]
18fb2a9e2b
Bump requests from 2.31.0 to 2.32.0
---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-21 05:54:04 +00:00
fa2d2d965d updated requirements 2024-05-18 11:20:05 -04:00
2abf8bc750 Use newer python -m build to build aprsd wheel
This patch changes the Makefile to make use of the
more modern mechanism in python to build a package
and wheel.
2024-05-18 11:19:10 -04:00
f15974131c Eliminate need for PBR
This patch also removes the setup.cfg and replaces it with
the pyproject.toml.

This also renames the dev-requirements.txt to requirements-dev.txt

To install dev
pip install -e ".[dev]"
2024-05-18 11:19:07 -04:00
4d1dfadbde
Merge pull request #163 from craigerl/dependabot/pip/jinja2-3.1.4
Bump jinja2 from 3.1.3 to 3.1.4
2024-05-07 20:01:37 -04:00
93a9cce0c0 Put an upper bound on the QueueHandler queue
This patch overrides the base QueueHandler class
from logging to ensure that the queue doesn't grow
infinitely.  That can be a problem when there is
no consumer pulling items out of the queue.
The queue is now capped at 200 entries max.
2024-05-07 20:00:17 -04:00
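
A minimal sketch of the override described above: logging.handlers.QueueHandler normally does an unbounded put, so bounding the queue at 200 and dropping on overflow keeps it from growing when no consumer is pulling items (class and variable names are assumptions; the 200 cap is from the message):

    import logging
    import logging.handlers
    import queue

    class BoundedQueueHandler(logging.handlers.QueueHandler):
        """QueueHandler whose enqueue() drops records when the queue is full."""

        def enqueue(self, record):
            try:
                self.queue.put_nowait(record)
            except queue.Full:
                pass  # no consumer is draining; drop instead of growing

    log_queue = queue.Queue(maxsize=200)  # cap from the commit message
    logging.getLogger().addHandler(BoundedQueueHandler(log_queue))
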
dependabot[bot]
321260ff7a
Bump jinja2 from 3.1.3 to 3.1.4
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.3 to 3.1.4.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.3...3.1.4)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-06 20:55:03 +00:00
cb2a3441b4 Updated Changelog for 3.4.0 2024-04-29 09:38:47 -04:00
fc9ab4aa74 Change setup.sh 2024-04-24 19:36:15 -04:00
a5680a7cbb Fixed docker setup.sh comparison 2024-04-24 19:11:59 -04:00
c4b17eee9d Fixed unit tests failing with WatchList 2024-04-24 16:27:40 -04:00
63f3de47b7 Added config enable_packet_logging
If you want to disable the logging of packets to the log file, set this
new common config option to False
2024-04-24 13:57:24 -04:00
c206f52a76 Make all the Objectstore children use the same lock
This patch updates the ObjectStore and its child classes
to all use the same lock.
2024-04-24 13:53:23 -04:00
2b2bf6c92d Fixed PacketTrack with UnknownPacket
This patch fixes an issue with rx() for an UnknownPacket type
trying to access ackMsgNo (reply ack)
2024-04-24 10:45:47 -04:00
992485e9c7 Removed the requirement on click-completion
This was an older way to do command line completion with
click.  Now we use the built in completion with click itself.
click.shell_completion
2024-04-23 16:14:29 -04:00
f02db20c3e Update Dockerfiles
This patch changes the entrypoint and commands to be in line
with how Docker defines their usage.  This allows the admin using
this container to specify which command to run in the
docker-compose.yml if they want to run something other than the
aprsd server command.

This now allows easily running webchat as a container :)!
2024-04-23 09:38:37 -04:00
09b97086bc Added fix for entry_points with old python 2024-04-21 12:41:19 -04:00
c43652dbea Added config for enable_seen_list
This patch allows the admin to disable the callsign seen list
packet tracking feature.
2024-04-20 19:54:02 -04:00
29d97d9f0c Fix APRSDStats start_time 2024-04-20 17:07:48 -04:00
813bc7ea29 Added default_packet_send_count config
This allows you to configure how many times a non-ACK packet
will be sent before giving up.
2024-04-19 15:59:55 -04:00
bef32059f4 Call packet collecter after prepare during tx.
We have to call the packet collector.tx() only after
a packet has been prepared for tx, because that's when the
new msgNo is assigned.
2024-04-19 13:02:58 -04:00
717db6083e Added PacketTrack to packet collector
Now the PacketTrack object is a packet collector as well.
2024-04-17 16:54:08 -04:00
4c7e27c88b Webchat Send Beacon uses Path selected in UI
This patch changes the Send Beacon button capability in
webchat to use the path selected in the UI for the
actual beacon being sent out.
2024-04-17 12:34:01 -04:00
88d26241f5 Added try except blocks in collectors
This patch adds some try except blocks in both the stats collector
and the packets collector calls to registered objects.  This can
prevent the rest of APRSD falling down when the collector objects
have a failure of some sort.
2024-04-17 12:24:56 -04:00
27359d61aa Remove error logs from watch list 2024-04-17 09:01:49 -04:00
7541f13174 Fixed issue with PacketList being empty 2024-04-16 23:12:58 -04:00
a656d93263 Added new PacketCollector
This patch adds the new PacketCollector class.
It's a single point for collecting information about
packets sent and received from the APRS client.
Basically, instead of having the packet list call the seen list
when we get a packet, we simply call PacketCollector.rx(),
which in turn calls each registered PacketMonitor class.

This allows us to decouple the packet-stats-like classes inside
of APRSD.  More importantly, it allows extensions to append their
own PacketMonitor class to the chain without modifying APRSD.
2024-04-16 14:34:14 -04:00
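
A sketch of that chain under stated assumptions: PacketCollector, PacketMonitor, and rx() are named in the message, registration-by-class follows the stats collector change just below, and everything else is invented for illustration:

    from typing import Protocol

    class PacketMonitor(Protocol):
        def rx(self, packet) -> None: ...
        def tx(self, packet) -> None: ...

    class PacketCollector:
        def __init__(self):
            self.monitor_classes: list[type] = []

        def register(self, monitor_class: type) -> None:
            self.monitor_classes.append(monitor_class)

        def rx(self, packet) -> None:
            # Fan the received packet out to each registered monitor,
            # e.g. the packet list and the seen list.
            for monitor_class in self.monitor_classes:
                monitor_class().rx(packet)  # monitors assumed to be singletons
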
cb0cfeea0b Fixed Keepalive access to email stats
This patch fixes a potential issue accessing an email stat
that might not be set yet.
2024-04-16 13:09:33 -04:00
8d86764c23 Added support for RX replyacks
This patch adds support for processing incoming packets that have
the 'new' acks embedded in messages called replyacks as described here:

http://www.aprs.org/aprs11/replyacks.txt
2024-04-16 11:39:46 -04:00
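
For illustration, a reply-ack can be spotted with a regex over the message text, assuming the {msgNo}ackMsgNo trailer format from the linked spec (this regex is an assumption, not aprsd's actual parser; ackMsgNo is the field name a commit above mentions):

    import re

    # Per replyacks.txt, a reply-ack message ends in "{MM}AA": MM is the
    # outgoing message number, AA acks a previously received message.
    REPLYACK_RE = re.compile(
        r"\{(?P<msgNo>[A-Za-z0-9]{1,5})\}(?P<ackMsgNo>[A-Za-z0-9]{1,5})?$")

    m = REPLYACK_RE.search("yes I got it{12}34")
    if m:
        print(m.group("msgNo"), m.group("ackMsgNo"))  # -> 12 34
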
dc4879a367 Changed Stats Collector registration
This patch changes the stats Collector object registration
to take a class name instead of an object.   This allows the
app to start up and fetch the configuration correctly so that
when objects are created the CONF has the proper values.
This is so singleton objects can assign settings values at
creation time.
2024-04-16 11:06:38 -04:00
4542c0a643 Added PacketList.set_maxlen()
If you want a constructor-time member to have a configured
value, you have to set it after the stats collector
registration is done; otherwise it will only have the default,
since CONF isn't set up at that point yet.
2024-04-15 21:43:01 -04:00
3e8716365e another fix for tx send 2024-04-15 11:29:26 -04:00
758ea432ed removed Packet.last_send_attempt and just use send_count 2024-04-15 10:00:35 -04:00
1c9f25a3b3 Fix access to PacketList._maxlen 2024-04-15 09:19:05 -04:00
7c935345e5 added packet_count in packet_list stats 2024-04-15 08:34:45 -04:00
c2f8af06bc force uwsgi to 2.0.24 2024-04-14 20:27:26 -04:00
5b2a59fae3 small update 2024-04-14 14:08:46 -04:00
8392d6b8ef Added new config options for PacketList
This allows the admin to set the number of packets to store
in the PacketList object for tracking.  For apps like IRC,
we need to store lots more packets to detect dupes.
2024-04-14 12:48:09 -04:00
1a7694e7e2 Update requirements 2024-04-13 10:41:49 -04:00
f2d39e5fd2 Added threads chart to admin ui graphs 2024-04-12 17:43:11 -04:00
3bd7adda44 set packetlist max back to 100 2024-04-12 17:17:53 -04:00
91ba6d10ce ensure thread count is updated 2024-04-12 17:03:10 -04:00
c6079f897d Added threads table in the admin web ui 2024-04-12 16:33:52 -04:00
66e4850353 Fixed issue with APRSDThreadList stats()
The stats method was setting the key to the class name
and not the thread name, giving an inaccurate list
of actual running threads.
2024-04-12 15:08:39 -04:00
40c028c844 Added new default_ack_send_count config option
There may be applications where the admin might not want a hard
coded 3 acks sent for every RX'd packet.  This patch adds the
ability to change the number of acks sent per RX'd packet.
The default is still 3.
2024-04-12 14:36:27 -04:00
4c2a40b7a7 Remove packet from tracker after max attempts 2024-04-12 11:12:57 -04:00
f682890ef0 Limit packets to 50 in PacketList 2024-04-12 09:01:57 -04:00
026dc6e376 synchronize the add for StatsStore 2024-04-11 22:55:01 -04:00
f59b65d13c Lock on stats for PacketList 2024-04-11 22:24:02 -04:00
5ff62c9bdf Fixed PacketList maxlen
This patch removes the MutableMapping from PacketList
and fixes the code that keeps the max packets in the internal
dict.
2024-04-11 21:40:43 -04:00
5fa4eaf909 Fixed a problem with the webchat tab notification
Somehow the hidden div for the webchat interface's
tab notification was removed.  This patch adds it back in
so the user knows that they have message(s) for a tab that
isn't selected.
2024-04-11 18:11:05 -04:00
f34120c2df Another fix for ACK packets 2024-04-11 17:28:47 -04:00
3bef1314f8 Fix issue not tracking RX Ack packets for stats
This patch updates the RX tracking for packets.  We now
track every packet that comes into the rx thread,
so the stats are accurate.
2024-04-11 16:54:46 -04:00
94f36e0aad Fix time plugin
This patch adds the tzlocal package to help find the local timezone
correctly, so that pytz can correctly build the time needed for
the time plugin.
2024-04-10 22:03:29 -04:00
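
A sketch of the mechanics, using the real tzlocal.get_localzone() call; how the time plugin actually formats its reply is assumed:

    import datetime

    import pytz
    from tzlocal import get_localzone

    local_tz = pytz.timezone(str(get_localzone()))  # find the local zone name
    now = datetime.datetime.now(local_tz)           # localized time for the plugin
    print(now.strftime("%H:%M %Z"))
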
Craig Lamparter
886ad9be09
add GATE route to webchat along with WIDE1, etc 2024-04-10 13:19:46 -07:00
Craig Lamparter
aa6e732935
Update webchat, include GATE route along with WIDE, ARISS, etc 2024-04-10 13:18:24 -07:00
b3889896b9 Get rid of some useless warning logs 2024-04-10 13:59:32 -04:00
8f6f8007f4 Added human_info property to MessagePackets
This patch adds the human_info property to the MessagePacket
object to just return the filtered message_text
2024-04-10 13:58:44 -04:00
2e9cf3ce88 Fixed scrolling problem with new webchat sent msg
The Webchat ui was failing to scroll properly upon sending
a new message from a tab that had a lot of messages already.
2024-04-09 10:07:12 -04:00
8728926bf4 Fix some issues with listen command
The listen command had some older references to some of the
thread modules.  This patch fixes those.
2024-04-09 09:58:59 -04:00
2c5bc6c1f7 Admin interface catch empty stats
This patch adds checks in the admin js to ensure that the
specific stats aren't empty before trying to dereference.
2024-04-09 07:46:06 -04:00
80705cb341 Ensure StatsStore has empty data
This patch ensures that the StatsStore object has a default
empty dict for data.
2024-04-09 06:59:22 -04:00
a839dbd3c5 Ensure latest pip is in docker image
This patch adds a command to update pip in both Dockerfiles.
2024-04-08 17:00:42 -04:00
1267a53ec8
Merge pull request #159 from craigerl/stats-rework
Reworked the stats making the rpc server obsolete.
2024-04-08 16:12:16 -04:00
da882b4f9b LOG failed requests post to admin ui 2024-04-08 13:07:15 -04:00
6845d266f2 changed admin web_ip to StrOpt
The option was an IPOpt, which prevented the user
from setting the ip to a hostname
2024-04-08 12:47:17 -04:00
db2fbce079 Updated prism to 1.29 2024-04-08 10:26:54 -04:00
bc3bdc48d2 Removed json-viewer 2024-04-08 10:16:08 -04:00
7114269cee Remove rpyc as a requirement 2024-04-05 16:00:45 -04:00
fcc02f29af Delete more stats from webchat
This patch removes some more stats that the webchat
ui doesn't need.
2024-04-05 15:24:11 -04:00
0ca9072c97 Admin UI working again 2024-04-05 15:03:22 -04:00
333feee805 Removed RPC Server and client.
This patch removes the need for the RPC Server from aprsd.

APRSD now saves its stats to a pickled file on disk in the
aprsd.conf configured save_location.  The web admin UI
will depickle that file to fetch the stats.  The aprsd server
will periodically pickle and save the stats to disk.

The Logmonitor will now do a url post to the web admin ui
to send it the latest log entries.

Updated the healthcheck app to use the pickled stats file
and the fetch-stats command to make a url request to the running
admin ui to fetch the stats of the remote aprsd server.
2024-04-05 12:50:01 -04:00
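
A minimal sketch of that handoff, assuming a plain stats dict and the configured save_location directory (the file name is hypothetical):

    import pickle
    from pathlib import Path

    def save_stats(stats: dict, save_location: str) -> None:
        # Server side: periodically pickle the stats to disk.
        path = Path(save_location) / "stats.pkl"  # file name assumed
        with path.open("wb") as f:
            pickle.dump(stats, f)

    def load_stats(save_location: str) -> dict:
        # Admin UI / healthcheck side: depickle the latest snapshot.
        path = Path(save_location) / "stats.pkl"
        with path.open("rb") as f:
            return pickle.load(f)
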
a8d56a9967 Remove the logging of the conf password if not set 2024-04-03 18:01:11 -04:00
50e491bab4 Lock around client reset
We now have multiple places where we call reset in case
a network connection fails, so now there is a mutex lock
around the reset method.
2024-04-02 18:23:37 -04:00
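
A sketch of that mutex (all names other than reset are assumptions):

    import threading

    class Client:
        def __init__(self):
            self._reset_lock = threading.Lock()

        def reset(self):
            # Several threads may call reset() after a network failure;
            # the lock ensures only one reset runs at a time.
            with self._reset_lock:
                self._recreate_connection()  # hypothetical helper
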
71d72adf06 Allow stats collector to serialize upon creation
This does some cleanup with the stats collector and
usage of the stats.  The patch adds a new optional
param to the collector's collect() method to tell
the object to provide serializable stats.  This is
used for the webchat app that sends stats to the
browser.
2024-04-02 14:07:37 -04:00
e2e58530b2 Fixed issues with watch list at startup 2024-04-02 09:30:45 -04:00
01cd0a0327 Fixed access to log_monitor 2024-04-02 09:30:45 -04:00
f92b2ee364 Got unit tests working again 2024-04-02 09:30:45 -04:00
a270c75263 Fixed pep8 errors and missing files 2024-04-02 09:30:45 -04:00
bd005f628d Reworked the stats making the rpc server obsolete.
This patch implements a new stats collector paradigm
which uses the typing Protocol.  Any object that wants to
supply stats to the collector has to implement the
aprsd.stats.collector.StatsProducer protocol, which at the
current time means implementing a stats() method on the object.

Then register the stats singleton producer with the collector by
calling collector.Collector().register_producer()

This only works if the stats producer object is a singleton.
2024-04-02 09:30:43 -04:00
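
A sketch of that paradigm using the names the message gives (StatsProducer, stats(), Collector().register_producer()) plus the serializable flag from a newer commit above; the singleton plumbing and collect() shape are assumptions:

    from typing import Protocol

    class StatsProducer(Protocol):
        """Shape of aprsd.stats.collector.StatsProducer, per the message."""
        def stats(self, serializable: bool = False) -> dict: ...

    class Collector:
        _instance = None

        def __new__(cls):
            if cls._instance is None:
                cls._instance = super().__new__(cls)
                cls._instance.producers = []
            return cls._instance

        def register_producer(self, producer_class) -> None:
            # Takes the class, not an instance, so CONF is fully loaded
            # before the (singleton) producer reads its settings.
            self.producers.append(producer_class)

        def collect(self, serializable: bool = False) -> dict:
            return {
                p.__name__: p().stats(serializable=serializable)
                for p in self.producers
            }
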
200944f37a
Merge pull request #158 from craigerl/client-update
Update client.py to add consumer in the API.
2024-04-02 09:26:30 -04:00
a62e490353 Update client.py to add consumer in the API.
This adds a layer between the client object and the
actual client instance, so we can reset the actual
client object instance upon failure of connection.
2024-03-28 16:51:56 -04:00
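
A sketch of that layering: a wrapper owns the real client instance, exposes consumer(), and rebuilds the instance on connection failure (consumer and reset come from the messages above; every other name is an assumption):

    class APRSClientWrapper:
        """Layer between callers and the actual client instance."""

        def __init__(self, create_client):
            self._create_client = create_client
            self._client = create_client()

        def reset(self):
            # Throw away the failed instance and build a fresh one.
            self._client = self._create_client()

        def consumer(self, callback, **kwargs):
            try:
                self._client.consumer(callback, **kwargs)
            except Exception:
                # Connection failed: reset the underlying instance so the
                # next consume loop starts with a fresh connection.
                self.reset()
                raise
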
428edaced9 Fix for sample-config warning
This patch fixes a small issue with the sample-config command
outputting a warning during generation.
2024-03-27 10:29:30 -04:00
8f588e653d update requirements 2024-03-25 09:47:16 -04:00
144ad34ae5
Merge pull request #154 from craigerl/packet_updates
Packet updates
2024-03-25 09:20:35 -04:00
0321cb6cf1 Put packet.json back in 2024-03-23 21:06:20 -04:00
c0623596cd Change debug log color
This patch changes the debug log color from dark blue to grey.
2024-03-23 19:27:23 -04:00
f400c6004e Fix for filtering curse words
This patch adds a fix for filtering out curse words.
This adds a flag to the regex to ignore case!
2024-03-23 18:02:01 -04:00
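
For illustration, the fix amounts to compiling the filter regex with re.IGNORECASE (the word list and function here are stand-ins):

    import re

    CURSE_RE = re.compile(r"\b(badword1|badword2)\b", re.IGNORECASE)  # stand-in list

    def filter_curse_words(text: str) -> str:
        return CURSE_RE.sub("****", text)
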
873fc06608 added packet counter random int
The packet counter now starts at a random number between 1 and 9999
instead of always at 1.
2024-03-23 17:56:49 -04:00
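
The change amounts to seeding the counter like this (variable name assumed):

    import random

    # Start the APRS message counter at a random point instead of 1.
    packet_counter = random.randint(1, 9999)
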
f53df24988 More packet cleanup and tests 2024-03-23 17:05:41 -04:00
f4356e4a20 Show comment in multiline packet output
This patch adds the comment for a packet if it exists
in the multiline log output
2024-03-23 13:00:51 -04:00
c581dc5020 Added new config option log_packet_format
This new DEFAULT group option specifies what format to use
when logging a packet.
2024-03-23 11:50:01 -04:00
da7b7124d7 Some packet cleanup 2024-03-23 10:54:10 -04:00
9e26df26d6 Added new webchat config option for logging
This patch adds a new config option for the webchat command
to disable url request logging.
2024-03-23 10:46:17 -04:00
b461231c00 Fix some pep8 issues 2024-03-23 10:24:02 -04:00
1e6c483002 Completely redo logging of packets!!
Refactored all logging of packets.

The Packet class now doesn't do logging.
The format of the packet log now lives on a single line with
colors.

Created a new packet property called human_info, which
creates a string for the payload of each packet type
in a human readable format.

TODO: need to create a config option to allow showing the
older style of multiline logs for packets.
2024-03-22 23:20:16 -04:00
127d3b3f26 Fixed some logging in webchat 2024-03-22 23:19:54 -04:00
f450238348 Added missing packet types in listen command
This patch adds some missing packet objects for the
listen command.  It also moves the keepalive startup
a little later.
2024-03-22 23:18:47 -04:00
9858955d34 Don't call stats so often in webchat 2024-03-22 23:16:00 -04:00
e386e91f6e Eliminated need for from_aprslib_dict
This patch eliminates the need for a custom
static method on each Packet class to convert an aprslib
raw decoded dictionary into the correct Packet class.

This now uses the built in dataclasses_json from_dict()
mixin with an override for both the WeatherPacket and
the ThirdPartyPacket.

This patch also adds the TelemetryPacket and adds some
missing members to a few of the classes from test runs
decoding all packets from APRS-IS -> Packet classes.

Also adds some verification for packets in test_packets
2024-03-20 21:46:43 -04:00
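
A minimal sketch of the mixin approach, using the real dataclasses_json from_dict() support; the packet fields shown are simplified assumptions:

    from dataclasses import dataclass

    from dataclasses_json import DataClassJsonMixin

    @dataclass
    class MessagePacket(DataClassJsonMixin):
        from_call: str = ""
        to_call: str = ""
        message_text: str = ""

    raw = {"from_call": "KM6XX", "to_call": "KM6YY", "message_text": "hello"}
    pkt = MessagePacket.from_dict(raw)  # aprslib dict -> Packet class
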
386d2bea62 Fix for micE packet decoding with mbits 2024-03-20 16:12:18 -04:00
eada5e9ce2 updated dev-requirements 2024-03-20 15:52:01 -04:00
00e185b4e7 Fixed some tox errors related to mypy 2024-03-20 15:41:29 -04:00
1477e61b0f Refactored packets
This patch removes the need for the dacite2 package for creating
packet objects from the aprslib decoded packet dictionary.

It moves the factory method from the base Packet object
to the core module.
2024-03-20 15:41:25 -04:00
6f1d6b4122 removed print 2024-03-20 15:39:18 -04:00
90f212e6dc small refactor of stats usage in version plugin 2024-03-20 15:39:18 -04:00
9c77ca26be Added type setting on pluging.py for mypy 2024-03-20 15:39:18 -04:00
d80277c9d8 Moved Threads list for mypy
This patch moves the APRSDThreadList to the bottom
of the file so that we can specify the type in the
threads_list member for mypy.
2024-03-20 15:39:18 -04:00
29b4b04eee No need to synchronize on stats
This patch updates the stats object to remove the synchronize
on calling stats.  Each property on the stats object is already
synchronized.
2024-03-20 15:39:18 -04:00
12dab284cb Start to add types 2024-03-20 15:39:18 -04:00
d0f53c563f Update tox for mypy runs 2024-03-20 15:39:18 -04:00
24830ae810
Merge pull request #155 from craigerl/dependabot/pip/black-24.3.0
Bump black from 24.2.0 to 24.3.0
2024-03-20 15:38:59 -04:00
dependabot[bot]
52896a1c6f
Bump black from 24.2.0 to 24.3.0
Bumps [black](https://github.com/psf/black) from 24.2.0 to 24.3.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/24.2.0...24.3.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-20 18:14:03 +00:00
82b3761628 replaced access to conf from uwsgi 2024-03-14 12:15:23 -04:00
8797dfd072 Fixed call to setup_logging in uwsgi 2024-03-14 12:11:30 -04:00
c1acdc2510 Fixed access to conf.log in logging_setup 2024-03-14 11:41:34 -04:00
71cd7e0ab5 Changelog for 3.3.2 2024-03-13 13:49:11 -04:00
d485f484ec Remove warning during sample-config
This patch removes a warning log during sample-config
generation
2024-03-13 13:47:01 -04:00
f810c02d5d Removed print in utils
This patch removes a leftover debug print in utils.load_entry_points
that was causing sample-config output to be bogus.
2024-03-13 13:44:09 -04:00
50e24abb81 Updates for 3.3.1 2024-03-12 10:41:16 -04:00
10d023dd7b Fixed failure with fetch-stats
This patch makes fetch-stats fail nicely if it can't connect
to the rpc server on the other end.
2024-03-12 10:37:17 -04:00
cb9456b29d Fixed problem with list-plugins
This patch includes a fix to the list-plugins and
list-extensions commands.
2024-03-12 10:36:26 -04:00
122 changed files with 6881 additions and 7213 deletions


@@ -43,8 +43,9 @@ jobs:
         with:
           context: "{{defaultContext}}:docker"
           platforms: linux/amd64,linux/arm64
-          file: ./Dockerfile-dev
+          file: ./Dockerfile
           build-args: |
+            INSTALL_TYPE=github
             BRANCH=${{ steps.extract_branch.outputs.branch }}
             BUILDX_QEMU_ENV=true
           push: true


@@ -17,7 +17,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.9", "3.10", "3.11"]
+        python-version: ["3.10", "3.11"]
     steps:
       - uses: actions/checkout@v2
       - name: Set up Python ${{ matrix.python-version }}
@@ -53,8 +53,9 @@ jobs:
         with:
           context: "{{defaultContext}}:docker"
           platforms: linux/amd64,linux/arm64
-          file: ./Dockerfile-dev
+          file: ./Dockerfile
           build-args: |
+            INSTALL_TYPE=github
             BRANCH=${{ steps.branch-name.outputs.current_branch }}
             BUILDX_QEMU_ENV=true
           push: true


@@ -7,7 +7,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.9", "3.10", "3.11"]
+        python-version: ["3.10", "3.11"]
     steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}

ChangeLog (937 lines, deleted)

@@ -1,937 +0,0 @@
CHANGES
=======
v3.3.0
------
* sample-config fix
* Fixed registry url post
* Changed processpkt message
* Fixed RegistryThread not sending requests
* use log.setup\_logging
* Disable debug logs for aprslib
* Make registry thread sleep
* Put threads first after date/time
* Replace slow rich logging with loguru
* Updated requirements
* Fixed pep8
* Added list-extensions and updated README.rst
* Change defaults for beacon and registry
* Add log info for Beacon and Registry threads
* fixed frequency\_seconds to IntOpt
* fixed references to conf
* changed the default packet timeout to 5 minutes
* Fixed default service registry url
* fix pep8 failures
* py311 fails in github
* Don't send uptime to registry
* Added sending software string to registry
* add py310 gh actions
* Added the new APRS Registry thread
* Added installing extensions to Docker run
* Cleanup some logs
* Added BeaconPacket
* updated requirements files
* removed some unneeded code
* Added iterator to objectstore
* Added some missing classes to threads
* Added support for loading extensions
* Added location for callsign tabs in webchat
* updated gitignore
* Create codeql.yml
* update github action branches to v8
* Added Location info on webchat interface
* Updated dev test-plugin command
* Update requirements.txt
* Update for v3.2.3
v3.2.3
------
* Force fortune path during setup test
* added /usr/games to path
* Added fortune to Dockerfile-dev
* Added missing fortune app
* aprsd: main.py: Fix premature return in sample\_config
* Update weather.py because you can't sort icons by penis
* Update weather.py both weather plugins have new Ww regex
* Update weather.py
* Fixed a bug with OWMWeatherPlugin
* Rework Location Plugin
v3.2.2
------
* Update for v3.2.2 release
* Fix for types
* Fix wsgi for prod
* pep8 fixes
* remove python 3.12 from github builds
* Fixed datetime access in core.py
* removed invalid reference to config.py
* Updated requirements
* Reworked the admin graphs
* Test new packet serialization
* Try to localize js libs and css for no internet
* Normalize listen --aprs-login
* Bump werkzeug from 2.3.7 to 3.0.1
* Update INSTALL with new conf files
* Bump urllib3 from 2.0.6 to 2.0.7
v3.2.1
------
* Changelog for 3.2.1
* Update index.html disable form autocomplete
* Update the packet\_dupe\_timeout warning
* Update the webchat paths
* Changed the path option to a ListOpt
* Fixed default path for tcp\_kiss client
* Set a default password for admin
* Fix path for KISS clients
* Added packet\_dupe\_timeout conf
* Add ability to change path on every TX packet
* Make Packet objects hashable
* Bump urllib3 from 2.0.4 to 2.0.6
* Don't process AckPackets as dupes
* Fixed another msgNo int issue
* Fixed issue with packet tracker and msgNo Counter
* Fixed import of MutableMapping
* pep8 fixes
* rewrote packet\_list and drop dupe packets
* Log a warning on dupe
* Fix for dupe packets
v3.2.0
------
* Update Changelog for 3.2.0
* minor cleanup prior to release
* Webchat: fix input maxlength
* WebChat: cleanup some console.logs
* WebChat: flash a dupe message
* Webchat: Fix issue accessing msg.id
* Webchat: Fix chat css on older browsers
* WebChat: new tab should get focus
* Bump gevent from 23.9.0.post1 to 23.9.1
* Webchat: Fix pep8 errors
* Webchat: Added tab notifications and raw packet
* WebChat: Prevent sending message without callsign
* WebChat: fixed content area scrolling
* Webchat: tweaks to UI for expanding chat
* Webchat: Fixed bug deleting first tab
* Ensure Keepalive doesn't reset client at startup
* Ensure parse\_delta\_str doesn't puke
* WebChat: Send GPS Beacon working
* webchat: got active tab onclick working
* webchat: set to\_call to value of tab when selected
* Center the webchat input form
* Update index.html to use chat.css
* Deleted webchat mobile pages
* Added close X on webchat tabs
* Reworked webchat with new UI
* Updated the webchat UI to look like iMessage
* Restore previous conversations in webchat
* Remove VIM from Dockerfile
* recreate client during reset()
* updated github workflows
* Updated documentation build
* Removed admin\_web.py
* Removed some RPC server log noise
* Fixed admin page packet date
* RPC Server logs the client IP on failed auth
* Start keepalive thread first
* fixed an issue in the mobile webchat
* Added dupe checking code to webchat mobile
* click on the div after added
* Webchat suppress to display of dupe messages
* Convert webchat internet urls to local static urls
* Make use of webchat gps config options
* Added new webchat config section
* fixed webchat logging.logformat typo
v3.1.3
------
* prep for 3.1.3
* Forcefully allow development webchat flask
v3.1.2
------
* Updated Changelog for 3.1.2
* Added support for ThirdParty packet types
* Disable the Send GPS Beacon button
* Removed adhoc ssl support in webchat
v3.1.1
------
* Updated Changelog for v3.1.1
* Fixed pep8 failures
* re-enable USWeatherPlugin to use mapClick
* Fix sending packets over KISS interface
* Use config web\_ip for running admin ui from module
* remove loop log
* Max out the client reconnect backoff to 5
* Update the Dockerfile
v3.1.0
------
* Changelog updates for v3.1.0
* Use CONF.admin.web\_port for single launch web admin
* Fixed sio namespace registration
* Update Dockerfile-dev to include uwsgi
* Fixed pep8
* change port to 8000
* replacement of flask-socketio with python-socketio
* Change how fetch-stats gets its defaults
* Ensure fetch-stats ip is a string
* Add info logging for rpc server calls
* updated wsgi config default /config/aprsd.conf
* Added timing after each thread loop
* Update docker bin/admin.sh
* Removed flask-classful from webchat
* Remove flask pinning
* removed linux/arm/v8
* Update master build to include linux/arm/v8
* Update Dockerfile-dev to fix plugin permissions
* update manual build github
* Update requirements for upgraded cryptography
* Added more libs for Dockerfile-dev
* Replace Dockerfile-dev with python3 slim
* Moved logging to log for wsgi.py
* Changed weather plugin regex pattern
* Limit the float values to 3 decimal places
* Fixed rain numbers from aprslib
* Fixed rpc client initialization
* Fix in for aprslib issue #80
* Try and fix Dockerfile-dev
* Fixed pep8 errors
* Populate stats object with threads info
* added counts to the fetch-stats table
* Added the fetch-stats command
* Replace ratelimiter with rush
* Added some utilities to Dockerfile-dev
* add arm64 for manual github build
* Added manual master build
* Update master-build.yml
* Add github manual trigger for master build
* Fixed unit tests for Location plugin
* Use new tox and update github workflows
* Updated requirements
* force tox to 4.3.5
* Update github workflows
* Fixed pep8 violation
* Added rpc server for listen
* Update location plugin and reworked requirements
* Fixed .readthedocs.yaml format
* Add .readthedocs.yaml
* Example plugin wrong function
* Ensure conf is imported for threads/tx
* Update Dockerfile to help build cryptography
v3.0.3
------
* Update Changelog to 3.0.3
* cleanup some debug messages
* Fixed loading of plugins for server
* Don't load help plugin for listen command
* Added listen args
* Change listen command plugins
* Added listen.sh for docker
* Update Listen command
* Update Dockerfile
* Add ratelimiting for acks and other packets
v3.0.2
------
* Update Changelog for 3.0.2
* Import RejectPacket
v3.0.1
------
* 3.0.1
* Add support to Reject messages
* Update Docker builds for 3.0.0
v3.0.0
------
* Update Changelog for 3.0.0
* Ensure server command main thread doesn't exit
* Fixed save directory default
* Fixed pep8 failure
* Cleaned up KISS interfaces use of old config
* reworked usage of importlib.metadata
* Added new docs files for 3.0.0
* Removed url option from healthcheck in dev
* Updated Healthcheck to use rpc to call aprsd
* Updated docker/bin/run.sh to use new conf
* Added ObjectPacket
* Update regex processing and regex for plugins
* Change ordering of starting up of server command
* Update documentation and README
* Decouple admin web interface from server command
* Dockerfile now produces aprsd.conf
* Fix some unit tests and loading of CONF w/o file
* Added missing conf
* Removed references to old custom config
* Convert config to oslo\_config
* Added rain formatting unit tests to WeatherPacket
* Fix Rain reporting in WeatherPacket send
* Removed Packet.send()
* Removed watchlist plugins
* Fix PluginManager.get\_plugins
* Cleaned up PluginManager
* Cleaned up PluginManager
* Update routing for weatherpacket
* Fix some WeatherPacket formatting
* Fix pep8 violation
* Add packet filtering for aprsd listen
* Added WeatherPacket encoding
* Updated webchat and listen for queue based RX
* reworked collecting and reporting stats
* Removed unused threading code
* Change RX packet processing to enqueue
* Make tracking objectstores work w/o initializing
* Cleaned up packet transmit class attributes
* Fix packets timestamp to int
* More messaging -> packets cleanup
* Cleaned out all references to messaging
* Added constructing a GPSPacket for sending
* cleanup webchat
* Reworked all packet processing
* Updated plugins and plugin interfaces for Packet
* Started using dataclasses to describe packets
v2.6.1
------
* v2.6.1
* Fixed position report for webchat beacon
* Try and fix broken 32bit qemu builds on 64bit system
* Add unit tests for webchat
* remove armv7 build RUST sucks
* Fix for Collections change in 3.10
v2.6.0
------
* Update workflow again
* Update Dockerfile to 22.04
* Update Dockerfile and build.sh
* Update workflow
* Prep for 2.6.0 release
* Update requirements
* Removed Makefile comment
* Update Makefile for dev vs. run environments
* Added pyopenssl for https for webchat
* change from device-detector to user-agents
* Remove twine from dev-requirements
* Update to latest Makefile.venv
* Refactored threads a bit
* Mark packets as acked in MsgTracker
* remove dev setting for template
* Add GPS beacon to mobile page
* Allow werkzeug for admin interface
* Allow werkzeug for admin interface
* Add support for mobile browsers for webchat
* Ignore callsign case while processing packets
* remove linux/arm/v7 for official builds for now
* added workflow for building specific version
* Allow passing in version to the Dockerfile
* Send GPS Beacon from webchat interface
* specify Dockerfile-dev
* Fixed build.sh
* Build on the source not released aprsd
* Remove email validation
* Add support for building linux/arm/v7
* Remove python 3.7 from docker build github
* Fixed failing unit tests
* change github workflow
* Removed TimeOpenCageDataPlugin
* Dump config with aprsd dev test-plugin
* Updated requirements
* Got webchat working with KISS tcp
* Added click auto\_envvar\_prefix
* Update aprsd thread base class to use queue
* Update packets to use wrapt
* Add remving existing requirements
* Try sending raw APRSFrames to aioax25
* Use new aprsd.callsign as the main callsign
* Fixed access to threads refactor
* Added webchat command
* Moved log.py to logging
* Moved trace.py to utils
* Fixed pep8 errors
* Refactored threads.py
* Refactor utils to directory
* remove arm build for now
* Added rustc and cargo to Dockerfile
* remove linux/arm/v6 from docker platform build
* Only tag master build as master
* Remove docker build from test
* create master-build.yml
* Added container build action
* Update docs on using Docker
* Update dev-requirements pip-tools
* Fix typo in docker-compose.yml
* Fix PyPI scraping
* Allow web interface when running in Docker
* Fix typo on exception
* README formatting fixes
* Bump dependencies to fix python 3.10
* Fixed up config option checking for KISS
* Fix logging issue with log messages
* for 2.5.9
v2.5.9
------
* FIX: logging exceptions
* Updated build and run for rich lib
* update build for 2.5.8
v2.5.8
------
* For 2.5.8
* Removed debug code
* Updated list-plugins
* Renamed virtualenv dir to .aprsd-venv
* Added unit tests for dev test-plugin
* Send Message command defaults to config
v2.5.7
------
* Updated Changelog
* Fixed an KISS config disabled issue
* Fixed a bug with multiple notify plugins enabled
* Unify the logging to file and stdout
* Added new feature to list-plugins command
* more README.rst cleanup
* Updated README examples
v2.5.6
------
* Changelog
* Tightened up the packet logging
* Added unit tests for USWeatherPlugin, USMetarPlugin
* Added test\_location to test LocationPlugin
* Updated pytest output
* Added py39 to tox for tests
* Added NotifyPlugin unit tests and more
* Small cleanup on packet logging
* Reduced the APRSIS connection reset to 2 minutes
* Fixed the NotifyPlugin
* Fixed some pep8 errors
* Add tracing for dev command
* Added python rich library based logging
* Added LOG\_LEVEL env variable for the docker
v2.5.5
------
* Update requirements to use aprslib 0.7.0
* fixed the failure during loading for objectstore
* updated docker build
v2.5.4
------
* Updated Changelog
* Fixed dev command missing initialization
v2.5.3
------
* Fix admin logging tab
v2.5.2
------
* Added new list-plugins command
* Don't require check-version command to have a config
* Healthcheck command doesn't need the aprsd.yml config
* Fix test failures
* Removed requirement for aprs.fi key
* Updated Changelog
v2.5.1
------
* Removed stock plugin
* Removed the stock plugin
v2.5.0
------
* Updated for v2.5.0
* Updated Dockerfile's and build script for docker
* Cleaned up some verbose output & colorized output
* Reworked all the common arguments
* Fixed test-plugin
* Ensure common params are honored
* pep8
* Added healthcheck to the cmds
* Removed the need for FROMCALL in dev test-plugin
* Pep8 failures
* Refactor the cli
* Updated Changelog for 4.2.3
* Fixed a problem with send-message command
v2.4.2
------
* Updated Changelog
* Be more careful picking data to/from disk
* Updated Changelog
v2.4.1
------
* Ensure plugins are last to be loaded
* Fixed email connecting to smtp server
v2.4.0
------
* Updated Changelog for 2.4.0 release
* Converted MsgTrack to ObjectStoreMixin
* Fixed unit tests
* Make sure SeenList update has a from in packet
* Ensure PacketList is initialized
* Added SIGTERM to signal\_handler
* Enable configuring where to save the objectstore data
* PEP8 cleanup
* Added objectstore Mixin
* Added -num option to aprsd-dev test-plugin
* Only call stop\_threads if it exists
* Added new SeenList
* Added plugin version to stats reporting
* Added new HelpPlugin
* Updated aprsd-dev to use config for logfile format
* Updated build.sh
* removed usage of config.check\_config\_option
* Fixed send-message after config/client rework
* Fixed issue with flask config
* Added some server startup info logs
* Increase email delay to +10
* Updated dev to use plugin manager
* Fixed notify plugins
* Added new Config object
* Fixed email plugin's use of globals
* Refactored client classes
* Refactor utils usage
* 2.3.1 Changelog
v2.3.1
------
* Fixed issue of aprs-is missing keepalive
* Fixed packet processing issue with aprsd send-message
v2.3.0
------
* Prep 2.3.0
* Enable plugins to return message object
* Added enabled flag for every plugin object
* Ensure plugin threads are valid
* Updated Dockerfile to use v2.3.0
* Removed fixed size on logging queue
* Added Logfile tab in Admin ui
* Updated Makefile clean target
* Added self creating Makefile help target
* Update dev.py
* Allow passing in aprsis\_client
* Fixed a problem with the AVWX plugin not working
* Remove some noisy trace in email plugin
* Fixed issue at startup with notify plugin
* Fixed email validation
* Removed values from forms
* Added send-message to the main admin UI
* Updated requirements
* Cleaned up some pep8 failures
* Upgraded the send-message POC to use websockets
* New Admin ui send message page working
* Send Message via admin Web interface
* Updated Admin UI to show KISS connections
* Got TX/RX working with aioax25+direwolf over TCP
* Rebased from master
* Added the ability to use direwolf KISS socket
* Update Dockerfile to use 2.2.1
v2.2.1
------
* Update Changelog for 2.2.1
* Silence some log noise
v2.2.0
------
* Updated Changelog for v2.2.0
* Updated overview image
* Removed Black code style reference
* Removed TXThread
* Added days to uptime string formatting
* Updated select timeouts
* Rebase from master and run gray
* Added tracking plugin processing
* Added threads functions to APRSDPluginBase
* Refactor Message processing and MORE
* Use Gray instead of Black for code formatting
* Updated tox.ini
* Fixed LOG.debug issue in weather plugin
* Updated slack channel link
* Cleanup of the README.rst
* Fixed aprsd-dev
v2.1.0
------
* Prep for v2.1.0
* Enable multiple replies for plugins
* Put in a fix for aprslib parse exceptions
* Fixed time plugin
* Updated the charts; added the packets chart
* Added showing symbol images to watch list
v2.0.0
------
* Updated docs for 2.0.0
* Reworked the notification threads and admin ui
* Fixed small bug with packets get\_packet\_type
* Updated overview images
* Move version string output to top of log
* Add new watchlist feature
* Fixed the Ack thread not resending acks
* reworked the admin ui to use semantic ui more
* Added messages count to admin messages list
* Add admin UI tabs for charts, messages, config
* Removed a noisy debug log
* Dump out the config during startup
* Added message counts for each plugin
* Bump urllib3 from 1.26.4 to 1.26.5
* Added aprsd version checking
* Updated INSTALL.txt
* Update my callsign
* Update README.rst
* Update README.rst
* Bump urllib3 from 1.26.3 to 1.26.4
* Prep for v1.6.1 release
v1.6.1
------
* Removed debug log for KeepAlive thread
* ignore Makefile.venv
* Reworked Makefile to use Makefile.venv
* Fixed version unit tests
* Updated stats output for KeepAlive thread
* Update Dockerfile-dev to work with startup
* Force all the graphs to 0 minimum
* Added email messages graphs
* Reworked the stats dict output and healthcheck
* Added callsign to the web index page
* Added log config for flask and lnav config file
* Added showing APRS-IS server to stats
* Provide an initial datapoint on rendering index
* Make the index page behind auth
* Bump pygments from 2.7.3 to 2.7.4
* Added acks with messages graphs
* Updated web stats index to show messages and ram usage
* Added aprsd web index page
* Bump lxml from 4.6.2 to 4.6.3
* Bump jinja2 from 2.11.2 to 2.11.3
* Bump urllib3 from 1.26.2 to 1.26.3
* Added log format and dateformat to config file
* Added Dockerfile-dev and updated build.sh
* Require python 3.7 and >
* Added plugin live reload and StockPlugin
* Updated Dockerfile and build.sh
* Updated Dockerfile for multiplatform builds
* Updated Dockerfile for multiplatform builds
* Dockerfile: Make creation of /config quiet failure
* Updated README docs
v1.6.0
------
* 1.6.0 release prep
* Updated path of run.sh for docker build
* Moved docker related stuffs to docker dir
* Removed some noisy debug log
* Bump cryptography from 3.3.1 to 3.3.2
* Wrap another server call with try except
* Wrap all imap calls with try except blocks
* Bump bleach from 3.2.1 to 3.3.0
* EmailThread was exiting because of IMAP timeout, added exceptions for this
* Added memory tracing in keeplive
* Fixed tox pep8 failure for trace
* Added tracing facility
* Fixed email login issue
* duplicate email messages from RF would generate usage response
* Enable debug logging for smtp and imap
* more debug around email thread
* debug around EmailThread hanging or vanishing
* Fixed resend email after config rework
* Added flask messages web UI and basic auth
* Fixed an issue with LocationPlugin
* Cleaned up the KeepAlive output
* updated .gitignore
* Added healthcheck app
* Add flask and flask\_classful reqs
* Added Flask web thread and stats collection
* First hack at flask
* Allow email to be disabled
* Reworked the config file and options
* Updated documentation and config output
* Fixed extracting lat/lon
* Added openweathermap weather plugin
* Added new time plugins
* Fixed TimePlugin timezone issue
* remove fortune white space
* fix git with install.txt
* change query char from ? to !
* Updated readme to include readthedocs link
* Added aprsd-dev plugin test cli and WxPlugin
v1.5.1
------
* Updated Changelog for v1.5.1
* Updated README to fix pypi page
* Update INSTALL.txt
v1.5.0
------
* Updated Changelog for v1.5.0 release
* Fix tox tests
* fix usage statement
* Enabled some emailthread messages and added timestamp
* Fixed main server client initialization
* test plugin expect responses update to match query output
* Fixed the queryPlugin unit test
* Removed flask code
* Changed default log level to INFO
* fix plugin tests to expect new strings
* fix query command syntax ?, ?3, ?d(elete), ?a(ll)
* Fixed latitude reporting in locationPlugin
* get rid of some debug noise from tracker and email delay
* fixed sample-config double print
* make sample config easier to interpret
* Fixed comments
* Added the ability to add comments to the config file
* Updated docker run.sh script
* Added --raw format for sending messages
* Fixed --quiet option
* Added send-message login checking and --no-ack
* Added new config for aprs.fi API Key
* Added a fix for failed logins to APRS-IS
* Fixed unit test for fortune plugin
* Fixed fortune plugin failures
* getting out of git hell with client.py problems
* Extend APRS.IS object to change login string
* Extend APRS.IS object to change login string
* expect different reply from query plugin
* update query plugin to resend last N messages. syntax: ?rN
* Added unit test for QueryPlugin
* Updated MsgTrack restart\_delayed
* refactor Plugin objects to plugins directory
* Updated README with more workflow details
* change query character syntax, don't reply that we're resending stuff
* Added APRSD system diagram to docs
* Disable MX record validation
* Added some more badges to readme files
* Updated build for docs tox -edocs
* switch command characters for query plugin
* Fix broken test
* undo git disaster
* swap Query command characters a bit
* Added Sphinx based documentation
* refactor Plugin objects to plugins directory
* Updated Makefile
* removed double-quote-string-fixer
* Lots of fixes
* Added more pre-commit hook tests
* Fixed email shortcut lookup
* Added Makefile for easy dev setup
* Added Makefile for easy dev setup
* Cleaned out old ack\_dict
* add null reply for send\_email
* Updated README with more workflow details
* backout my patch that broke tox, trying to push to craiger-test branch
* Fixed failures caused by last commit
* don't tell radio emails were sent, ack is enuf
* Updated README to include development env
* Added pre-commit hooks
* Update Changelog for v1.5.0
* Added QueryPlugin resend all delayed msgs or Flush
* Added QueryPlugin
* Added support to save/load MsgTrack on exit/start
* Creation of MsgTrack object and other stuff
* Added FortunePlugin unit test
* Added some plugin unit tests
* reworked threading
* Reworked messaging lib
v1.1.0
------
* Refactored the main process\_packet method
* Update README with version 1.1.0 related info
* Added fix for an unknown packet type
* Ensure fortune is installed
* Updated docker-compose
* Added Changelog
* Fixed issue when RX ack
* Updated the aprsd-slack-plugin required version
* Updated README.rst
* Fixed send-message with email command and others
* Update .gitignore
* Big patch
* Major refactor
* Updated the Dockerfile to use alpine
v1.0.1
------
* Fix unknown characterset emails
* Updated logging timestamp to include []
* Updated README with a TOC
* Updates for building containers
* Don't use the dirname for the plugin path search
* Reworked Plugin loading
* Updated README with development information
* Fixed an issue with weather plugin
v1.0.0
------
* Rewrote the README.md to README.rst
* Fixed the usage string after plugins introduced
* Created plugin.py for Command Plugins
* Refactor networking and commands
* get rid of some debug statements
* yet another unicode problem, in resend\_email fixed
* reset default email check delay to 60, fix a few comments
* Update tox environment to fix formatting python errors
* fixed fortune. yet another unicode issue, tested in py3 and py2
* lose some logging statements
* completely off urllib now, tested locate/weather in py2 and py3
* add urllib import back until i replace all calls with requests
* cleaned up weather code after switch to requests ... from urllib. works on py2 and py3
* switch from urlib to requests for weather, tested in py3 and py2. still need to update locate, and all other http calls
* imap tags are unicode in py3. .decode tags
* Update INSTALL.txt
* Initial conversion to click
* Reconnect on socket timeout
* clean up code around closed_socket and reconnect
* Update INSTALL.txt
* Fixed all pep8 errors and some py3 errors
* fix check_email_thread to do proper threading, take delay as arg
* found another .decode that didn't include errors='ignore'
* some failed attempts at getting the first txt or html from a multipart message, currently sends the last
* fix parse_email unicode probs by using body.decode(errors='ignore').. again
* fix parse_email unicode probs by using body.decode(errors='ignore')
* clean up code around closed_socket and reconnect
* socket timeout 5 minutes
* Detect closed socket, reconnect, with a bit more grace
* can detect closed socket and reconnect now
* Update INSTALL.txt
* more debugging messages trying to find rare tight loop in main
* Update INSTALL.txt
* main loop went into tight loop, more debug prints
* main loop went into tight loop, added debug print before every continue
* Update INSTALL.txt
* Update INSTALL.txt
* George Carlin profanity filter
* added decaying email check timer which resets with activity
* Fixed all pep8 errors and some py3 errors
* Fixed all pep8 errors and some py3 errors
* Reconnect on socket timeout
* socket reconnect on timeout testing
* socket timeout of 300 instead of 60
* Reconnect on socket timeout
* socket reconnect on timeout testing
* Fixed all pep8 errors and some py3 errors
* fix check_email_thread to do proper threading, take delay as arg
* INSTALL.txt for the average person
* fix bugs after beautification and yaml config additions. Convert to sockets. case insensitive commands
* fix INBOX
* Update README.md
* Added tox support
* Fixed SMTP settings
* Created fake_aprs.py
* select inbox if gmail server
* removed ASS
* Added a try block around imap login
* Added port and fixed telnet user
* Require ~/.aprsd/config.yml
* updated README for install and usage instructions
* added test to ensure shortcuts in config.yml
* added exit if missing config file
* Added reading of a config file
* update readme
* update readme
* sanitize readme
* readme again again
* readme again again
* readme again
* readme
* readme update
* First stab at migrating this to a pypi repo
* First stab at migrating this to a pypi repo
* Added password, callsign and host
* Added argparse for cli options
* comments
* Cleaned up trailing whitespace
* add tweaked fuzzyclock
* make tn a global
* Added standard python main()
* tweaks to readme
* drop virtenv on first line
* sanitize readme a bit more
* sanitize readme a bit more
* sanitize readme
* added weather and location 3
* added weather and location 2
* added weather and location
* mapme
* de-localize
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* de-localize
* Update README.md
* Update README.md
* Update aprsd.py
* Add files via upload
* Update README.md
* Update aprsd.py
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Add files via upload
* Initial commit

ChangeLog.md (new file, 1194 lines)

File diff suppressed because it is too large.


@ -1,5 +1,5 @@
WORKDIR?=.
VENVDIR ?= $(WORKDIR)/.aprsd-venv
VENVDIR ?= $(WORKDIR)/.venv
.DEFAULT_GOAL := help
@ -17,14 +17,19 @@ Makefile.venv:
help: # Help for the Makefile
@egrep -h '\s##\s' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'
dev: REQUIREMENTS_TXT = requirements.txt dev-requirements.txt
dev: REQUIREMENTS_TXT = requirements.txt requirements-dev.txt
dev: venv ## Create a python virtual environment for development of aprsd
run: venv ## Create a virtual environment for running aprsd commands
docs: dev
changelog: dev
npm i -g auto-changelog
auto-changelog -l false --sort-commits date -o ChangeLog.md
docs: changelog
m2r --overwrite ChangeLog.md
cp README.rst docs/readme.rst
cp Changelog docs/changelog.rst
mv ChangeLog.rst docs/changelog.rst
tox -edocs
clean: clean-build clean-pyc clean-test clean-dev ## remove all build, test, coverage and Python artifacts
@ -39,7 +44,6 @@ clean-build: ## remove build artifacts
clean-pyc: ## remove Python file artifacts
find . -name '*.pyc' -exec rm -f {} +
find . -name '*.pyo' -exec rm -f {} +
find . -name '*~' -exec rm -f {} +
find . -name '__pycache__' -exec rm -fr {} +
clean-test: ## remove test and coverage artifacts
@ -55,9 +59,9 @@ clean-dev:
test: dev ## Run all the tox tests
tox -p all
build: test ## Make the build artifact prior to doing an upload
build: test changelog ## Make the build artifact prior to doing an upload
$(VENV)/pip install twine
$(VENV)/python3 setup.py sdist bdist_wheel
$(VENV)/python3 -m build
$(VENV)/twine check dist/*
upload: build ## Upload a new version of the plugin
@ -81,8 +85,8 @@ docker-dev: test ## Make a development docker container tagged with hemna6969/a
update-requirements: dev ## Update the requirements.txt and dev-requirements.txt files
rm requirements.txt
rm dev-requirements.txt
rm requirements-dev.txt
touch requirements.txt
touch dev-requirements.txt
touch requirements-dev.txt
$(VENV)/pip-compile --resolver backtracking --annotation-style=line requirements.in
$(VENV)/pip-compile --resolver backtracking --annotation-style=line dev-requirements.in
$(VENV)/pip-compile --resolver backtracking --annotation-style=line requirements-dev.in


@ -11,6 +11,37 @@ ____________________
`APRSD <http://github.com/craigerl/aprsd>`_ is a Ham radio `APRS <http://aprs.org>`_ message command gateway built on python.
Table of Contents
=================
1. `What is APRSD <#what-is-aprsd>`_
2. `APRSD Overview Diagram <#aprsd-overview-diagram>`_
3. `Typical Use Case <#typical-use-case>`_
4. `Installation <#installation>`_
5. `Example Usage <#example-usage>`_
6. `Help <#help>`_
7. `Commands <#commands>`_
- `Configuration <#configuration>`_
- `Server <#server>`_
- `Current List of Built-in Plugins <#current-list-of-built-in-plugins>`_
- `Pypi.org APRSD Installable Plugin Packages <#pypiorg-aprsd-installable-plugin-packages>`_
- `🐍 APRSD Installed 3rd Party Plugins <#aprsd-installed-3rd-party-plugins>`_
- `Send Message <#send-message>`_
- `Send Email (Radio to SMTP Server) <#send-email-radio-to-smtp-server>`_
- `Receive Email (IMAP Server to Radio) <#receive-email-imap-server-to-radio>`_
- `Location <#location>`_
- `Web Admin Interface <#web-admin-interface>`_
8. `Development <#development>`_
- `Building Your Own APRSD Plugins <#building-your-own-aprsd-plugins>`_
9. `Workflow <#workflow>`_
10. `Release <#release>`_
11. `Docker Container <#docker-container>`_
- `Building <#building-1>`_
- `Official Build <#official-build>`_
- `Development Build <#development-build>`_
- `Running the Container <#running-the-container>`_
What is APRSD
=============
APRSD is a python application for interacting with the APRS network and providing
@ -69,6 +100,7 @@ Help
====
::
└─> aprsd -h
Usage: aprsd [OPTIONS] COMMAND [ARGS]...
@ -77,18 +109,19 @@ Help
-h, --help Show this message and exit.
Commands:
check-version Check this version against the latest in pypi.org.
completion Click Completion subcommands
dev Development type subcommands
healthcheck Check the health of the running aprsd server.
list-plugins List the built in plugins available to APRSD.
listen Listen to packets on the APRS-IS Network based on FILTER.
sample-config Generate a sample Config file from aprsd and all...
send-message Send a message to a callsign via APRS_IS.
server Start the aprsd server gateway process.
version Show the APRSD version.
webchat Web based HAM Radio chat program!
check-version Check this version against the latest in pypi.org.
completion Show the shell completion code
dev Development type subcommands
fetch-stats    Fetch stats from an APRSD admin web interface.
healthcheck Check the health of the running aprsd server.
list-extensions List the built in plugins available to APRSD.
list-plugins List the built in plugins available to APRSD.
listen Listen to packets on the APRS-IS Network based on FILTER.
sample-config Generate a sample Config file from aprsd and all...
send-message Send a message to a callsign via APRS_IS.
server Start the aprsd server gateway process.
version Show the APRSD version.
webchat Web based HAM Radio chat program!
Commands
@ -145,8 +178,7 @@ look for incoming commands to the callsign configured in the config file
Current list of built-in plugins
======================================
--------------------------------
::
└─> aprsd list-plugins
@ -298,18 +330,21 @@ AND... ping, fortune, time.....
Web Admin Interface
===================
APRSD has a web admin interface that allows you to view the status of the running APRSD server instance.
The web admin interface shows graphs of packet counts, packet types, number of threads running, the latest
packets sent and received, and the status of each of the plugins that are loaded. You can also view the logfile
and view the raw APRSD configuration file.
To start the web admin interface, You have to install gunicorn in your virtualenv that already has aprsd installed.
::
source <path to APRSD's virtualenv>/bin/activate
pip install gunicorn
gunicorn --bind 0.0.0.0:8080 "aprsd.wsgi:app"
aprsd admin --loglevel INFO
The web admin interface will be running on port 8080 on the local machine. http://localhost:8080
Development
===========
@ -318,7 +353,7 @@ Development
* ``make``
Workflow
========
--------
While working aprsd, The workflow is as follows:
@ -347,7 +382,7 @@ While working aprsd, The workflow is as follows:
Release
=======
-------
To do release to pypi:
@ -368,6 +403,29 @@ To do release to pypi:
``make upload``
Building your own APRSD plugins
-------------------------------
APRSD plugins are the mechanism by which APRSD can respond to APRS Messages. The plugins are loaded at server startup
and can also be loaded at listen startup. When a packet is received by APRSD, it is passed to each of the plugins
in the order they were registered in the config file. The plugins can then decide what to do with the packet.
When a plugin is called, it is passed a APRSD Packet object. The plugin can then do something with the packet and
return a reply message if desired. If a plugin does not want to reply to the packet, it can just return None.
When a plugin does return a reply message, APRSD will send the reply message to the appropriate destination.
For example, when a 'ping' message is received, the PingPlugin will return a reply message of 'pong'. When APRSD
receives the 'pong' message, it will be sent back to the original caller of the ping message.
APRSD plugins are simply python packages that can be installed from pypi.org. They are installed into the
aprsd virtualenv and can be imported by APRSD at runtime. The plugins are registered in the config file and loaded
at startup of the aprsd server command or the aprsd listen command.
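
As a minimal sketch of what such a plugin looks like (this mirrors the shape of
the built-in plugins; check the exact base-class name against the
``aprsd.plugin`` module you are running)::

    import logging

    from aprsd import packets, plugin

    LOG = logging.getLogger("APRSD")


    class PingPlugin(plugin.APRSDRegexCommandPluginBase):
        """Reply 'pong' to any message starting with 'p'."""

        # The regex a MessagePacket body must match to trigger this plugin.
        command_regex = "^[pP]"
        command_name = "ping"

        def process(self, packet: packets.MessagePacket):
            LOG.info(f"PingPlugin got a packet from {packet.from_call}")
            # Returning a string makes APRSD send it back to the sender;
            # returning None means this plugin has no reply.
            return "pong"

Registering the plugin's fully qualified class name in the config file (or
passing it to ``--enable-plugin`` on the ``listen`` command) is what gets it
loaded at startup.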
Overview
--------
You can build your own plugins by following the instructions in the `Building your own APRSD plugins`_ section.
Plugins are called by APRSD when packets are received.
Docker Container
================


@ -10,7 +10,10 @@
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
from importlib.metadata import PackageNotFoundError, version
__version__ = pbr.version.VersionInfo("aprsd").version_string()
try:
__version__ = version("aprsd")
except PackageNotFoundError:
pass


@ -1,8 +1,9 @@
import click
from functools import update_wrapper
import logging
from pathlib import Path
import typing as t
import click
from oslo_config import cfg
import aprsd
@ -58,7 +59,7 @@ class AliasedGroup(click.Group):
Copied from `click` and extended for `aliases`.
"""
def decorator(f):
aliases = kwargs.pop('aliases', [])
aliases = kwargs.pop("aliases", [])
cmd = click.decorators.command(*args, **kwargs)(f)
self.add_command(cmd)
for alias in aliases:
@ -74,7 +75,7 @@ class AliasedGroup(click.Group):
Copied from `click` and extended for `aliases`.
"""
def decorator(f):
aliases = kwargs.pop('aliases', [])
aliases = kwargs.pop("aliases", [])
cmd = click.decorators.group(*args, **kwargs)(f)
self.add_command(cmd)
for alias in aliases:
@ -137,7 +138,7 @@ def process_standard_options_no_config(f: F) -> F:
ctx.obj["loglevel"] = kwargs["loglevel"]
ctx.obj["config_file"] = kwargs["config_file"]
ctx.obj["quiet"] = kwargs["quiet"]
log.setup_logging_no_config(
log.setup_logging(
ctx.obj["loglevel"],
ctx.obj["quiet"],
)


@ -1,348 +0,0 @@
import abc
import logging
import time
import aprslib
from aprslib.exceptions import LoginError
from oslo_config import cfg
from aprsd import exception
from aprsd.clients import aprsis, fake, kiss
from aprsd.packets import core, packet_list
from aprsd.utils import trace
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
TRANSPORT_APRSIS = "aprsis"
TRANSPORT_TCPKISS = "tcpkiss"
TRANSPORT_SERIALKISS = "serialkiss"
TRANSPORT_FAKE = "fake"
# Main must create this from the ClientFactory
# object such that it's populated with the
# Correct config
factory = None
class Client:
"""Singleton client class that constructs the aprslib connection."""
_instance = None
_client = None
connected = False
server_string = None
filter = None
def __new__(cls, *args, **kwargs):
"""This magic turns this into a singleton."""
if cls._instance is None:
cls._instance = super().__new__(cls)
# Put any initialization here.
return cls._instance
def set_filter(self, filter):
self.filter = filter
if self._client:
self._client.set_filter(filter)
@property
def client(self):
if not self._client:
LOG.info("Creating APRS client")
self._client = self.setup_connection()
if self.filter:
LOG.info("Creating APRS client filter")
self._client.set_filter(self.filter)
return self._client
def send(self, packet: core.Packet):
packet_list.PacketList().tx(packet)
self.client.send(packet)
def reset(self):
"""Call this to force a rebuild/reconnect."""
if self._client:
del self._client
else:
LOG.warning("Client not initialized, nothing to reset.")
# Recreate the client
LOG.info(f"Creating new client {self.client}")
@abc.abstractmethod
def setup_connection(self):
pass
@staticmethod
@abc.abstractmethod
def is_enabled():
pass
@staticmethod
@abc.abstractmethod
def transport():
pass
@abc.abstractmethod
def decode_packet(self, *args, **kwargs):
pass
class APRSISClient(Client):
_client = None
@staticmethod
def is_enabled():
# Defaults to True if the enabled flag is non existent
try:
return CONF.aprs_network.enabled
except KeyError:
return False
@staticmethod
def is_configured():
if APRSISClient.is_enabled():
# Ensure that the config vars are correctly set
if not CONF.aprs_network.login:
LOG.error("Config aprs_network.login not set.")
raise exception.MissingConfigOptionException(
"aprs_network.login is not set.",
)
if not CONF.aprs_network.password:
LOG.error("Config aprs_network.password not set.")
raise exception.MissingConfigOptionException(
"aprs_network.password is not set.",
)
if not CONF.aprs_network.host:
LOG.error("Config aprs_network.host not set.")
raise exception.MissingConfigOptionException(
"aprs_network.host is not set.",
)
return True
return True
def is_alive(self):
if self._client:
return self._client.is_alive()
else:
return False
@staticmethod
def transport():
return TRANSPORT_APRSIS
def decode_packet(self, *args, **kwargs):
"""APRS lib already decodes this."""
return core.Packet.factory(args[0])
def setup_connection(self):
user = CONF.aprs_network.login
password = CONF.aprs_network.password
host = CONF.aprs_network.host
port = CONF.aprs_network.port
connected = False
backoff = 1
aprs_client = None
while not connected:
try:
LOG.info("Creating aprslib client")
aprs_client = aprsis.Aprsdis(user, passwd=password, host=host, port=port)
# Force the log to be the same
aprs_client.logger = LOG
aprs_client.connect()
connected = True
backoff = 1
except LoginError as e:
LOG.error(f"Failed to login to APRS-IS Server '{e}'")
connected = False
time.sleep(backoff)
except Exception as e:
LOG.error(f"Unable to connect to APRS-IS server. '{e}' ")
connected = False
time.sleep(backoff)
# Don't allow the backoff to go to infinity.
if backoff > 5:
backoff = 5
else:
backoff += 1
continue
LOG.debug(f"Logging in to APRS-IS with user '{user}'")
self._client = aprs_client
return aprs_client
class KISSClient(Client):
_client = None
@staticmethod
def is_enabled():
"""Return if tcp or serial KISS is enabled."""
if CONF.kiss_serial.enabled:
return True
if CONF.kiss_tcp.enabled:
return True
return False
@staticmethod
def is_configured():
# Ensure that the config vars are correctly set
if KISSClient.is_enabled():
transport = KISSClient.transport()
if transport == TRANSPORT_SERIALKISS:
if not CONF.kiss_serial.device:
LOG.error("KISS serial enabled, but no device is set.")
raise exception.MissingConfigOptionException(
"kiss_serial.device is not set.",
)
elif transport == TRANSPORT_TCPKISS:
if not CONF.kiss_tcp.host:
LOG.error("KISS TCP enabled, but no host is set.")
raise exception.MissingConfigOptionException(
"kiss_tcp.host is not set.",
)
return True
return False
def is_alive(self):
if self._client:
return self._client.is_alive()
else:
return False
@staticmethod
def transport():
if CONF.kiss_serial.enabled:
return TRANSPORT_SERIALKISS
if CONF.kiss_tcp.enabled:
return TRANSPORT_TCPKISS
def decode_packet(self, *args, **kwargs):
"""We get a frame, which has to be decoded."""
LOG.debug(f"kwargs {kwargs}")
frame = kwargs["frame"]
LOG.debug(f"Got an APRS Frame '{frame}'")
# try and nuke the * from the fromcall sign.
# frame.header._source._ch = False
# payload = str(frame.payload.decode())
# msg = f"{str(frame.header)}:{payload}"
# msg = frame.tnc2
# LOG.debug(f"Decoding {msg}")
raw = aprslib.parse(str(frame))
packet = core.Packet.factory(raw)
if isinstance(packet, core.ThirdParty):
return packet.subpacket
else:
return packet
def setup_connection(self):
self._client = kiss.KISS3Client()
return self._client
class APRSDFakeClient(Client, metaclass=trace.TraceWrapperMetaclass):
@staticmethod
def is_enabled():
if CONF.fake_client.enabled:
return True
return False
@staticmethod
def is_configured():
return APRSDFakeClient.is_enabled()
def is_alive(self):
return True
def setup_connection(self):
return fake.APRSDFakeClient()
@staticmethod
def transport():
return TRANSPORT_FAKE
def decode_packet(self, *args, **kwargs):
LOG.debug(f"kwargs {kwargs}")
pkt = kwargs["packet"]
LOG.debug(f"Got an APRS Fake Packet '{pkt}'")
return pkt
class ClientFactory:
_instance = None
def __new__(cls, *args, **kwargs):
"""This magic turns this into a singleton."""
if cls._instance is None:
cls._instance = super().__new__(cls)
# Put any initialization here.
return cls._instance
def __init__(self):
self._builders = {}
def register(self, key, builder):
self._builders[key] = builder
def create(self, key=None):
if not key:
if APRSISClient.is_enabled():
key = TRANSPORT_APRSIS
elif KISSClient.is_enabled():
key = KISSClient.transport()
elif APRSDFakeClient.is_enabled():
key = TRANSPORT_FAKE
builder = self._builders.get(key)
LOG.debug(f"Creating client {key}")
if not builder:
raise ValueError(key)
return builder()
def is_client_enabled(self):
"""Make sure at least one client is enabled."""
enabled = False
for key in self._builders.keys():
try:
enabled |= self._builders[key].is_enabled()
except KeyError:
pass
return enabled
def is_client_configured(self):
enabled = False
for key in self._builders.keys():
try:
enabled |= self._builders[key].is_configured()
except KeyError:
pass
except exception.MissingConfigOptionException as ex:
LOG.error(ex.message)
return False
except exception.ConfigOptionBogusDefaultException as ex:
LOG.error(ex.message)
return False
return enabled
@staticmethod
def setup():
"""Create and register all possible client objects."""
global factory
factory = ClientFactory()
factory.register(TRANSPORT_APRSIS, APRSISClient)
factory.register(TRANSPORT_TCPKISS, KISSClient)
factory.register(TRANSPORT_SERIALKISS, KISSClient)
factory.register(TRANSPORT_FAKE, APRSDFakeClient)

aprsd/client/__init__.py (new file, 13 lines)

@ -0,0 +1,13 @@
from aprsd.client import aprsis, factory, fake, kiss
TRANSPORT_APRSIS = "aprsis"
TRANSPORT_TCPKISS = "tcpkiss"
TRANSPORT_SERIALKISS = "serialkiss"
TRANSPORT_FAKE = "fake"
client_factory = factory.ClientFactory()
client_factory.register(aprsis.APRSISClient)
client_factory.register(kiss.KISSClient)
client_factory.register(fake.APRSDFakeClient)

aprsd/client/aprsis.py (new file, 135 lines)

@ -0,0 +1,135 @@
import datetime
import logging
import time
from aprslib.exceptions import LoginError
from oslo_config import cfg
from aprsd import client, exception
from aprsd.client import base
from aprsd.client.drivers import aprsis
from aprsd.packets import core
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
class APRSISClient(base.APRSClient):
_client = None
def __init__(self):
max_timeout = {"hours": 0.0, "minutes": 2, "seconds": 0}
self.max_delta = datetime.timedelta(**max_timeout)
def stats(self) -> dict:
stats = {}
if self.is_configured():
stats = {
"server_string": self._client.server_string,
"sever_keepalive": self._client.aprsd_keepalive,
"filter": self.filter,
}
return stats
@staticmethod
def is_enabled():
# Defaults to True if the enabled flag is non existent
try:
return CONF.aprs_network.enabled
except KeyError:
return False
@staticmethod
def is_configured():
if APRSISClient.is_enabled():
# Ensure that the config vars are correctly set
if not CONF.aprs_network.login:
LOG.error("Config aprs_network.login not set.")
raise exception.MissingConfigOptionException(
"aprs_network.login is not set.",
)
if not CONF.aprs_network.password:
LOG.error("Config aprs_network.password not set.")
raise exception.MissingConfigOptionException(
"aprs_network.password is not set.",
)
if not CONF.aprs_network.host:
LOG.error("Config aprs_network.host not set.")
raise exception.MissingConfigOptionException(
"aprs_network.host is not set.",
)
return True
return True
def _is_stale_connection(self):
delta = datetime.datetime.now() - self._client.aprsd_keepalive
if delta > self.max_delta:
LOG.error(f"Connection is stale, last heard {delta} ago.")
return True
def is_alive(self):
if self._client:
return self._client.is_alive() and not self._is_stale_connection()
else:
LOG.warning(f"APRS_CLIENT {self._client} alive? NO!!!")
return False
def close(self):
if self._client:
self._client.stop()
self._client.close()
@staticmethod
def transport():
return client.TRANSPORT_APRSIS
def decode_packet(self, *args, **kwargs):
"""APRS lib already decodes this."""
return core.factory(args[0])
def setup_connection(self):
user = CONF.aprs_network.login
password = CONF.aprs_network.password
host = CONF.aprs_network.host
port = CONF.aprs_network.port
self.connected = False
backoff = 1
aprs_client = None
while not self.connected:
try:
LOG.info(f"Creating aprslib client({host}:{port}) and logging in {user}.")
aprs_client = aprsis.Aprsdis(user, passwd=password, host=host, port=port)
# Force the log to be the same
aprs_client.logger = LOG
aprs_client.connect()
self.connected = True
backoff = 1
except LoginError as e:
LOG.error(f"Failed to login to APRS-IS Server '{e}'")
self.connected = False
time.sleep(backoff)
except Exception as e:
LOG.error(f"Unable to connect to APRS-IS server. '{e}' ")
self.connected = False
time.sleep(backoff)
# Don't allow the backoff to go to infinity.
if backoff > 5:
backoff = 5
else:
backoff += 1
continue
self._client = aprs_client
return aprs_client
def consumer(self, callback, blocking=False, immortal=False, raw=False):
try:
self._client.consumer(
callback, blocking=blocking,
immortal=immortal, raw=raw,
)
except Exception as e:
LOG.error(f"Exception in consumer: {e}")

aprsd/client/base.py (new file, 126 lines)

@ -0,0 +1,126 @@
import abc
import logging
import threading
from oslo_config import cfg
import wrapt
from aprsd.packets import core
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
class APRSClient:
"""Singleton client class that constructs the aprslib connection."""
_instance = None
_client = None
connected = False
filter = None
lock = threading.Lock()
def __new__(cls, *args, **kwargs):
"""This magic turns this into a singleton."""
if cls._instance is None:
cls._instance = super().__new__(cls)
# Put any initialization here.
cls._instance._create_client()
return cls._instance
@abc.abstractmethod
def stats(self) -> dict:
"""Return statistics about the client connection.
Returns:
dict: Statistics about the connection and packet handling
"""
def set_filter(self, filter):
self.filter = filter
if self._client:
self._client.set_filter(filter)
@property
def client(self):
if not self._client:
self._create_client()
return self._client
def _create_client(self):
try:
self._client = self.setup_connection()
if self.filter:
LOG.info("Creating APRS client filter")
self._client.set_filter(self.filter)
except Exception as e:
LOG.error(f"Failed to create APRS client: {e}")
self._client = None
raise
def stop(self):
if self._client:
LOG.info("Stopping client connection.")
self._client.stop()
def send(self, packet: core.Packet) -> None:
"""Send a packet to the network.
Args:
packet: The APRS packet to send
"""
self.client.send(packet)
@wrapt.synchronized(lock)
def reset(self) -> None:
"""Call this to force a rebuild/reconnect."""
LOG.info("Resetting client connection.")
if self._client:
self._client.close()
del self._client
self._create_client()
else:
LOG.warning("Client not initialized, nothing to reset.")
# Recreate the client
LOG.info(f"Creating new client {self.client}")
@abc.abstractmethod
def setup_connection(self):
"""Initialize and return the underlying APRS connection.
Returns:
object: The initialized connection object
"""
@staticmethod
@abc.abstractmethod
def is_enabled():
pass
@staticmethod
@abc.abstractmethod
def transport():
pass
@abc.abstractmethod
def decode_packet(self, *args, **kwargs):
"""Decode raw APRS packet data into a Packet object.
Returns:
Packet: Decoded APRS packet
"""
@abc.abstractmethod
def consumer(self, callback, blocking=False, immortal=False, raw=False):
pass
@abc.abstractmethod
def is_alive(self):
pass
@abc.abstractmethod
def close(self):
pass


@ -1,3 +1,4 @@
import datetime
import logging
import select
import threading
@ -11,7 +12,6 @@ from aprslib.exceptions import (
import wrapt
import aprsd
from aprsd import stats
from aprsd.packets import core
@ -24,13 +24,20 @@ class Aprsdis(aprslib.IS):
# flag to tell us to stop
thread_stop = False
# date for last time we heard from the server
aprsd_keepalive = datetime.datetime.now()
# timeout in seconds
select_timeout = 1
lock = threading.Lock()
def stop(self):
self.thread_stop = True
LOG.info("Shutdown Aprsdis client.")
LOG.warning("Shutdown Aprsdis client.")
def close(self):
LOG.warning("Closing Aprsdis client.")
super().close()
@wrapt.synchronized(lock)
def send(self, packet: core.Packet):
@ -142,7 +149,6 @@ class Aprsdis(aprslib.IS):
self.logger.info(f"Connected to {server_string}")
self.server_string = server_string
stats.APRSDStats().set_aprsis_server(server_string)
except LoginError as e:
self.logger.error(str(e))
@ -176,24 +182,25 @@ class Aprsdis(aprslib.IS):
try:
for line in self._socket_readlines(blocking):
if line[0:1] != b"#":
self.aprsd_keepalive = datetime.datetime.now()
if raw:
callback(line)
else:
callback(self._parse(line))
else:
self.logger.debug("Server: %s", line.decode("utf8"))
stats.APRSDStats().set_aprsis_keepalive()
self.aprsd_keepalive = datetime.datetime.now()
except ParseError as exp:
self.logger.log(
11,
"%s\n Packet: %s",
"%s Packet: '%s'",
exp,
exp.packet,
)
except UnknownFormat as exp:
self.logger.log(
9,
"%s\n Packet: %s",
"%s Packet: '%s'",
exp,
exp.packet,
)


@ -67,7 +67,7 @@ class APRSDFakeClient(metaclass=trace.TraceWrapperMetaclass):
# Generate packets here?
raw = "GTOWN>APDW16,WIDE1-1,WIDE2-1:}KM6LYW-9>APZ100,TCPIP,GTOWN*::KM6LYW :KM6LYW: 19 Miles SW"
pkt_raw = aprslib.parse(raw)
pkt = core.Packet.factory(pkt_raw)
pkt = core.factory(pkt_raw)
callback(packet=pkt)
LOG.debug(f"END blocking FAKE consumer {self}")
time.sleep(8)


@ -81,7 +81,7 @@ class KISS3Client:
LOG.error("Failed to parse bytes received from KISS interface.")
LOG.exception(ex)
def consumer(self, callback, blocking=False, immortal=False, raw=False):
def consumer(self, callback):
LOG.debug("Start blocking KISS consumer")
self._parse_callback = callback
self.kiss.read(callback=self.parse_frame, min_frames=None)

aprsd/client/factory.py (new file, 88 lines)

@ -0,0 +1,88 @@
import logging
from typing import Callable, Protocol, runtime_checkable
from aprsd import exception
from aprsd.packets import core
LOG = logging.getLogger("APRSD")
@runtime_checkable
class Client(Protocol):
def __init__(self):
pass
def connect(self) -> bool:
pass
def disconnect(self) -> bool:
pass
def decode_packet(self, *args, **kwargs) -> type[core.Packet]:
pass
def is_enabled(self) -> bool:
pass
def is_configured(self) -> bool:
pass
def transport(self) -> str:
pass
def send(self, message: str) -> bool:
pass
def setup_connection(self) -> None:
pass
class ClientFactory:
_instance = None
clients = []
def __new__(cls, *args, **kwargs):
"""This magic turns this into a singleton."""
if cls._instance is None:
cls._instance = super().__new__(cls)
# Put any initialization here.
return cls._instance
def __init__(self):
self.clients: list[Callable] = []
def register(self, aprsd_client: Callable):
if not isinstance(aprsd_client, Client):
raise ValueError("Client must be a subclass of Client protocol")
self.clients.append(aprsd_client)
def create(self, key=None):
for client in self.clients:
if client.is_enabled():
return client()
raise Exception("No client is configured!!")
def is_client_enabled(self):
"""Make sure at least one client is enabled."""
enabled = False
for client in self.clients:
if client.is_enabled():
enabled = True
return enabled
def is_client_configured(self):
enabled = False
for client in self.clients:
try:
if client.is_configured():
enabled = True
except exception.MissingConfigOptionException as ex:
LOG.error(ex.message)
return False
except exception.ConfigOptionBogusDefaultException as ex:
LOG.error(ex.message)
return False
return enabled
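
Taken together with the registrations in ``aprsd/client/__init__.py`` above, a
rough usage sketch of this factory (assuming a config file with one transport
enabled has already been loaded) is::

    from aprsd.client import client_factory

    # client_factory is the module-level singleton built in
    # aprsd/client/__init__.py with APRSISClient, KISSClient and
    # APRSDFakeClient registered.
    if client_factory.is_client_enabled() and client_factory.is_client_configured():
        # create() returns the first registered client whose is_enabled() is True.
        aprs_client = client_factory.create()
        aprs_client.set_filter("m/50")  # illustrative APRS-IS range filter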

aprsd/client/fake.py (new file, 48 lines)

@ -0,0 +1,48 @@
import logging
from oslo_config import cfg
from aprsd import client
from aprsd.client import base
from aprsd.client.drivers import fake as fake_driver
from aprsd.utils import trace
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
class APRSDFakeClient(base.APRSClient, metaclass=trace.TraceWrapperMetaclass):
def stats(self) -> dict:
return {}
@staticmethod
def is_enabled():
if CONF.fake_client.enabled:
return True
return False
@staticmethod
def is_configured():
return APRSDFakeClient.is_enabled()
def is_alive(self):
return True
def close(self):
pass
def setup_connection(self):
self.connected = True
return fake_driver.APRSDFakeClient()
@staticmethod
def transport():
return client.TRANSPORT_FAKE
def decode_packet(self, *args, **kwargs):
LOG.debug(f"kwargs {kwargs}")
pkt = kwargs["packet"]
LOG.debug(f"Got an APRS Fake Packet '{pkt}'")
return pkt

aprsd/client/kiss.py (new file, 103 lines)

@ -0,0 +1,103 @@
import logging
import aprslib
from oslo_config import cfg
from aprsd import client, exception
from aprsd.client import base
from aprsd.client.drivers import kiss
from aprsd.packets import core
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
class KISSClient(base.APRSClient):
_client = None
def stats(self) -> dict:
stats = {}
if self.is_configured():
return {
"transport": self.transport(),
}
return stats
@staticmethod
def is_enabled():
"""Return if tcp or serial KISS is enabled."""
if CONF.kiss_serial.enabled:
return True
if CONF.kiss_tcp.enabled:
return True
return False
@staticmethod
def is_configured():
# Ensure that the config vars are correctly set
if KISSClient.is_enabled():
transport = KISSClient.transport()
if transport == client.TRANSPORT_SERIALKISS:
if not CONF.kiss_serial.device:
LOG.error("KISS serial enabled, but no device is set.")
raise exception.MissingConfigOptionException(
"kiss_serial.device is not set.",
)
elif transport == client.TRANSPORT_TCPKISS:
if not CONF.kiss_tcp.host:
LOG.error("KISS TCP enabled, but no host is set.")
raise exception.MissingConfigOptionException(
"kiss_tcp.host is not set.",
)
return True
return False
def is_alive(self):
if self._client:
return self._client.is_alive()
else:
return False
def close(self):
if self._client:
self._client.stop()
@staticmethod
def transport():
if CONF.kiss_serial.enabled:
return client.TRANSPORT_SERIALKISS
if CONF.kiss_tcp.enabled:
return client.TRANSPORT_TCPKISS
def decode_packet(self, *args, **kwargs):
"""We get a frame, which has to be decoded."""
LOG.debug(f"kwargs {kwargs}")
frame = kwargs["frame"]
LOG.debug(f"Got an APRS Frame '{frame}'")
# try and nuke the * from the fromcall sign.
# frame.header._source._ch = False
# payload = str(frame.payload.decode())
# msg = f"{str(frame.header)}:{payload}"
# msg = frame.tnc2
# LOG.debug(f"Decoding {msg}")
raw = aprslib.parse(str(frame))
packet = core.factory(raw)
if isinstance(packet, core.ThirdPartyPacket):
return packet.subpacket
else:
return packet
def setup_connection(self):
self._client = kiss.KISS3Client()
self.connected = True
return self._client
def consumer(self, callback, blocking=False, immortal=False, raw=False):
self._client.consumer(callback)

aprsd/client/stats.py (new file, 38 lines)

@ -0,0 +1,38 @@
import threading
from oslo_config import cfg
import wrapt
from aprsd import client
from aprsd.utils import singleton
CONF = cfg.CONF
@singleton
class APRSClientStats:
lock = threading.Lock()
@wrapt.synchronized(lock)
def stats(self, serializable=False):
cl = client.client_factory.create()
stats = {
"transport": cl.transport(),
"filter": cl.filter,
"connected": cl.connected,
}
if cl.transport() == client.TRANSPORT_APRSIS:
stats["server_string"] = cl.client.server_string
keepalive = cl.client.aprsd_keepalive
if serializable:
keepalive = keepalive.isoformat()
stats["server_keepalive"] = keepalive
elif cl.transport() == client.TRANSPORT_TCPKISS:
stats["host"] = CONF.kiss_tcp.host
stats["port"] = CONF.kiss_tcp.port
elif cl.transport() == client.TRANSPORT_SERIALKISS:
stats["device"] = CONF.kiss_serial.device
return stats
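
Assuming a client has already been created via ``client_factory.create()``,
the snapshot returned here is expected to look roughly like this (values are
illustrative)::

    from aprsd.client import stats

    snapshot = stats.APRSClientStats().stats(serializable=True)
    # e.g. {"transport": "aprsis", "filter": "m/50", "connected": True,
    #       "server_string": "rotate.aprs2.net:14580",
    #       "server_keepalive": "2024-11-08T13:00:00"}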

aprsd/cmds/admin.py (new file, 57 lines)

@ -0,0 +1,57 @@
import logging
import os
import signal
import click
from oslo_config import cfg
import socketio
import aprsd
from aprsd import cli_helper
from aprsd import main as aprsd_main
from aprsd import utils
from aprsd.main import cli
os.environ["APRSD_ADMIN_COMMAND"] = "1"
# this import has to happen AFTER we set the
# above environment variable, so that the code
# inside the wsgi.py has the value
from aprsd import wsgi as aprsd_wsgi # noqa
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
# main() ###
@cli.command()
@cli_helper.add_options(cli_helper.common_options)
@click.pass_context
@cli_helper.process_standard_options
def admin(ctx):
"""Start the aprsd admin interface."""
signal.signal(signal.SIGINT, aprsd_main.signal_handler)
signal.signal(signal.SIGTERM, aprsd_main.signal_handler)
level, msg = utils._check_version()
if level:
LOG.warning(msg)
else:
LOG.info(msg)
LOG.info(f"APRSD Started version: {aprsd.__version__}")
# Dump all the config options now.
CONF.log_opt_values(LOG, logging.DEBUG)
async_mode = "threading"
sio = socketio.Server(logger=True, async_mode=async_mode)
aprsd_wsgi.app.wsgi_app = socketio.WSGIApp(sio, aprsd_wsgi.app.wsgi_app)
aprsd_wsgi.init_app()
sio.register_namespace(aprsd_wsgi.LoggingNamespace("/logs"))
CONF.log_opt_values(LOG, logging.DEBUG)
aprsd_wsgi.app.run(
threaded=True,
debug=False,
port=CONF.admin.web_port,
host=CONF.admin.web_ip,
)


@ -1,5 +1,5 @@
import click
import click_completion
import click.shell_completion
from aprsd.main import cli
@ -7,30 +7,16 @@ from aprsd.main import cli
CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])
@cli.group(help="Click Completion subcommands", context_settings=CONTEXT_SETTINGS)
@click.pass_context
def completion(ctx):
pass
@cli.command()
@click.argument("shell", type=click.Choice(list(click.shell_completion._available_shells)))
def completion(shell):
"""Show the shell completion code"""
from click.utils import _detect_program_name
# show dumps out the completion code for a particular shell
@completion.command(help="Show completion code for shell", name="show")
@click.option("-i", "--case-insensitive/--no-case-insensitive", help="Case insensitive completion")
@click.argument("shell", required=False, type=click_completion.DocumentedChoice(click_completion.core.shells))
def show(shell, case_insensitive):
"""Show the click-completion-command completion code"""
extra_env = {"_CLICK_COMPLETION_COMMAND_CASE_INSENSITIVE_COMPLETE": "ON"} if case_insensitive else {}
click.echo(click_completion.core.get_code(shell, extra_env=extra_env))
# install will install the completion code for a particular shell
@completion.command(help="Install completion code for a shell", name="install")
@click.option("--append/--overwrite", help="Append the completion code to the file", default=None)
@click.option("-i", "--case-insensitive/--no-case-insensitive", help="Case insensitive completion")
@click.argument("shell", required=False, type=click_completion.DocumentedChoice(click_completion.core.shells))
@click.argument("path", required=False)
def install(append, case_insensitive, shell, path):
"""Install the click-completion-command completion"""
extra_env = {"_CLICK_COMPLETION_COMMAND_CASE_INSENSITIVE_COMPLETE": "ON"} if case_insensitive else {}
shell, path = click_completion.core.install(shell=shell, path=path, append=append, extra_env=extra_env)
click.echo(f"{shell} completion installed in {path}")
cls = click.shell_completion.get_completion_class(shell)
prog_name = _detect_program_name()
complete_var = f"_{prog_name}_COMPLETE".replace("-", "_").upper()
print(cls(cli, {}, prog_name, complete_var).source())
print("# Add the following line to your shell configuration file to have aprsd command line completion")
print("# but remove the leading '#' character.")
print(f"# eval \"$(aprsd completion {shell})\"")


@ -8,8 +8,9 @@ import logging
import click
from oslo_config import cfg
from aprsd import cli_helper, conf, packets, plugin
# local imports here
from aprsd import cli_helper, client, conf, packets, plugin
from aprsd.client import base
from aprsd.main import cli
from aprsd.utils import trace
@ -96,7 +97,7 @@ def test_plugin(
if CONF.trace_enabled:
trace.setup_tracing(["method", "api"])
client.Client()
base.APRSClient()
pm = plugin.PluginManager()
if load_all:


@ -1,10 +1,9 @@
# Fetch active stats from a remote running instance of aprsd server
# This uses the RPC server to fetch the stats from the remote server.
# Fetch active stats from a remote running instance of the aprsd admin web interface.
import logging
import click
from oslo_config import cfg
import requests
from rich.console import Console
from rich.table import Table
@ -12,7 +11,7 @@ from rich.table import Table
import aprsd
from aprsd import cli_helper
from aprsd.main import cli
from aprsd.rpc import client as rpc_client
from aprsd.threads.stats import StatsStore
# setup the global logger
@ -26,83 +25,80 @@ CONF = cfg.CONF
@click.option(
"--host", type=str,
default=None,
help="IP address of the remote aprsd server to fetch stats from.",
help="IP address of the remote aprsd admin web ui fetch stats from.",
)
@click.option(
"--port", type=int,
default=None,
help="Port of the remote aprsd server rpc port to fetch stats from.",
)
@click.option(
"--magic-word", type=str,
default=None,
help="Magic word of the remote aprsd server rpc port to fetch stats from.",
help="Port of the remote aprsd web admin interface to fetch stats from.",
)
@click.pass_context
@cli_helper.process_standard_options
def fetch_stats(ctx, host, port, magic_word):
"""Fetch stats from a remote running instance of aprsd server."""
LOG.info(f"APRSD Fetch-Stats started version: {aprsd.__version__}")
def fetch_stats(ctx, host, port):
"""Fetch stats from a APRSD admin web interface."""
console = Console()
console.print(f"APRSD Fetch-Stats started version: {aprsd.__version__}")
CONF.log_opt_values(LOG, logging.DEBUG)
if not host:
host = CONF.rpc_settings.ip
host = CONF.admin.web_ip
if not port:
port = CONF.rpc_settings.port
if not magic_word:
magic_word = CONF.rpc_settings.magic_word
port = CONF.admin.web_port
msg = f"Fetching stats from {host}:{port} with magic word '{magic_word}'"
console = Console()
msg = f"Fetching stats from {host}:{port}"
console.print(msg)
with console.status(msg):
client = rpc_client.RPCClient(host, port, magic_word)
stats = client.get_stats_dict()
console.print_json(data=stats)
response = requests.get(f"http://{host}:{port}/stats", timeout=120)
if not response:
console.print(
f"Failed to fetch stats from {host}:{port}?",
style="bold red",
)
return
stats = response.json()
if not stats:
console.print(
f"Failed to fetch stats from aprsd admin ui at {host}:{port}",
style="bold red",
)
return
aprsd_title = (
"APRSD "
f"[bold cyan]v{stats['aprsd']['version']}[/] "
f"Callsign [bold green]{stats['aprsd']['callsign']}[/] "
f"Uptime [bold yellow]{stats['aprsd']['uptime']}[/]"
f"[bold cyan]v{stats['APRSDStats']['version']}[/] "
f"Callsign [bold green]{stats['APRSDStats']['callsign']}[/] "
f"Uptime [bold yellow]{stats['APRSDStats']['uptime']}[/]"
)
console.rule(f"Stats from {host}:{port} with magic word '{magic_word}'")
console.rule(f"Stats from {host}:{port}")
console.print("\n\n")
console.rule(aprsd_title)
# Show the connection to APRS
# It can be a connection to an APRS-IS server or a local TNC via KISS or KISSTCP
if "aprs-is" in stats:
title = f"APRS-IS Connection {stats['aprs-is']['server']}"
title = f"APRS-IS Connection {stats['APRSClientStats']['server_string']}"
table = Table(title=title)
table.add_column("Key")
table.add_column("Value")
for key, value in stats["aprs-is"].items():
for key, value in stats["APRSClientStats"].items():
table.add_row(key, value)
console.print(table)
threads_table = Table(title="Threads")
threads_table.add_column("Name")
threads_table.add_column("Alive?")
for name, alive in stats["aprsd"]["threads"].items():
for name, alive in stats["APRSDThreadList"].items():
threads_table.add_row(name, str(alive))
console.print(threads_table)
msgs_table = Table(title="Messages")
msgs_table.add_column("Key")
msgs_table.add_column("Value")
for key, value in stats["messages"].items():
msgs_table.add_row(key, str(value))
console.print(msgs_table)
packet_totals = Table(title="Packet Totals")
packet_totals.add_column("Key")
packet_totals.add_column("Value")
packet_totals.add_row("Total Received", str(stats["packets"]["total_received"]))
packet_totals.add_row("Total Sent", str(stats["packets"]["total_sent"]))
packet_totals.add_row("Total Tracked", str(stats["packets"]["total_tracked"]))
packet_totals.add_row("Total Received", str(stats["PacketList"]["rx"]))
packet_totals.add_row("Total Sent", str(stats["PacketList"]["tx"]))
console.print(packet_totals)
# Show each of the packet types
@ -110,47 +106,206 @@ def fetch_stats(ctx, host, port, magic_word):
packets_table.add_column("Packet Type")
packets_table.add_column("TX")
packets_table.add_column("RX")
for key, value in stats["packets"]["by_type"].items():
for key, value in stats["PacketList"]["packets"].items():
packets_table.add_row(key, str(value["tx"]), str(value["rx"]))
console.print(packets_table)
if "plugins" in stats:
count = len(stats["plugins"])
count = len(stats["PluginManager"])
plugins_table = Table(title=f"Plugins ({count})")
plugins_table.add_column("Plugin")
plugins_table.add_column("Enabled")
plugins_table.add_column("Version")
plugins_table.add_column("TX")
plugins_table.add_column("RX")
for key, value in stats["plugins"].items():
plugins = stats["PluginManager"]
for key, value in plugins.items():
plugins_table.add_row(
key,
str(stats["plugins"][key]["enabled"]),
stats["plugins"][key]["version"],
str(stats["plugins"][key]["tx"]),
str(stats["plugins"][key]["rx"]),
str(plugins[key]["enabled"]),
plugins[key]["version"],
str(plugins[key]["tx"]),
str(plugins[key]["rx"]),
)
console.print(plugins_table)
if "seen_list" in stats["aprsd"]:
count = len(stats["aprsd"]["seen_list"])
seen_list = stats.get("SeenList")
if seen_list:
count = len(seen_list)
seen_table = Table(title=f"Seen List ({count})")
seen_table.add_column("Callsign")
seen_table.add_column("Message Count")
seen_table.add_column("Last Heard")
for key, value in stats["aprsd"]["seen_list"].items():
for key, value in seen_list.items():
seen_table.add_row(key, str(value["count"]), value["last"])
console.print(seen_table)
if "watch_list" in stats["aprsd"]:
count = len(stats["aprsd"]["watch_list"])
watch_list = stats.get("WatchList")
if watch_list:
count = len(watch_list)
watch_table = Table(title=f"Watch List ({count})")
watch_table.add_column("Callsign")
watch_table.add_column("Last Heard")
for key, value in stats["aprsd"]["watch_list"].items():
for key, value in watch_list.items():
watch_table.add_row(key, value["last"])
console.print(watch_table)
@cli.command()
@cli_helper.add_options(cli_helper.common_options)
@click.option(
"--raw",
is_flag=True,
default=False,
help="Dump raw stats instead of formatted output.",
)
@click.option(
"--show-section",
default=["All"],
help="Show specific sections of the stats. "
" Choices: All, APRSDStats, APRSDThreadList, APRSClientStats,"
" PacketList, SeenList, WatchList",
multiple=True,
type=click.Choice(
[
"All",
"APRSDStats",
"APRSDThreadList",
"APRSClientStats",
"PacketList",
"SeenList",
"WatchList",
],
case_sensitive=False,
),
)
@click.pass_context
@cli_helper.process_standard_options
def dump_stats(ctx, raw, show_section):
"""Dump the current stats from the running APRSD instance."""
console = Console()
console.print(f"APRSD Dump-Stats started version: {aprsd.__version__}")
with console.status("Dumping stats"):
ss = StatsStore()
ss.load()
stats = ss.data
if raw:
if "All" in show_section:
console.print(stats)
return
else:
for section in show_section:
console.print(f"Dumping {section} section:")
console.print(stats[section])
return
t = Table(title="APRSD Stats")
t.add_column("Key")
t.add_column("Value")
for key, value in stats["APRSDStats"].items():
t.add_row(key, str(value))
if "All" in show_section or "APRSDStats" in show_section:
console.print(t)
# Show the thread list
t = Table(title="Thread List")
t.add_column("Name")
t.add_column("Class")
t.add_column("Alive?")
t.add_column("Loop Count")
t.add_column("Age")
for name, value in stats["APRSDThreadList"].items():
t.add_row(
name,
value["class"],
str(value["alive"]),
str(value["loop_count"]),
str(value["age"]),
)
if "All" in show_section or "APRSDThreadList" in show_section:
console.print(t)
# Show the plugins
t = Table(title="Plugin List")
t.add_column("Name")
t.add_column("Enabled")
t.add_column("Version")
t.add_column("TX")
t.add_column("RX")
for name, value in stats["PluginManager"].items():
t.add_row(
name,
str(value["enabled"]),
value["version"],
str(value["tx"]),
str(value["rx"]),
)
if "All" in show_section or "PluginManager" in show_section:
console.print(t)
# Now show the client stats
t = Table(title="Client Stats")
t.add_column("Key")
t.add_column("Value")
for key, value in stats["APRSClientStats"].items():
t.add_row(key, str(value))
if "All" in show_section or "APRSClientStats" in show_section:
console.print(t)
# now show the packet list
packet_list = stats.get("PacketList")
t = Table(title="Packet List")
t.add_column("Key")
t.add_column("Value")
t.add_row("Total Received", str(packet_list["rx"]))
t.add_row("Total Sent", str(packet_list["tx"]))
if "All" in show_section or "PacketList" in show_section:
console.print(t)
# now show the seen list
seen_list = stats.get("SeenList")
sorted_seen_list = sorted(
seen_list.items(),
)
t = Table(title="Seen List")
t.add_column("Callsign")
t.add_column("Message Count")
t.add_column("Last Heard")
for key, value in sorted_seen_list:
t.add_row(
key,
str(value["count"]),
str(value["last"]),
)
if "All" in show_section or "SeenList" in show_section:
console.print(t)
# now show the watch list
watch_list = stats.get("WatchList")
sorted_watch_list = sorted(
watch_list.items(),
)
t = Table(title="Watch List")
t.add_column("Callsign")
t.add_column("Last Heard")
for key, value in sorted_watch_list:
t.add_row(
key,
str(value["last"]),
)
if "All" in show_section or "WatchList" in show_section:
console.print(t)
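
Since ``dump-stats`` only reads the snapshot that the stats thread last wrote
to disk, the same data can be pulled programmatically; a small sketch::

    from aprsd.threads.stats import StatsStore

    ss = StatsStore()
    ss.load()  # read the last stats snapshot written to disk
    print(ss.data.get("APRSDStats", {}))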


@ -13,11 +13,11 @@ from oslo_config import cfg
from rich.console import Console
import aprsd
from aprsd import cli_helper, utils
from aprsd import cli_helper
from aprsd import conf # noqa
# local imports here
from aprsd.main import cli
from aprsd.rpc import client as aprsd_rpc_client
from aprsd.threads import stats as stats_threads
# setup the global logger
@ -39,46 +39,48 @@ console = Console()
@cli_helper.process_standard_options
def healthcheck(ctx, timeout):
"""Check the health of the running aprsd server."""
console.log(f"APRSD HealthCheck version: {aprsd.__version__}")
if not CONF.rpc_settings.enabled:
LOG.error("Must enable rpc_settings.enabled to use healthcheck")
sys.exit(-1)
if not CONF.rpc_settings.ip:
LOG.error("Must enable rpc_settings.ip to use healthcheck")
sys.exit(-1)
if not CONF.rpc_settings.magic_word:
LOG.error("Must enable rpc_settings.magic_word to use healthcheck")
sys.exit(-1)
ver_str = f"APRSD HealthCheck version: {aprsd.__version__}"
console.log(ver_str)
with console.status(f"APRSD HealthCheck version: {aprsd.__version__}") as status:
with console.status(ver_str):
try:
status.update(f"Contacting APRSD via RPC {CONF.rpc_settings.ip}")
stats = aprsd_rpc_client.RPCClient().get_stats_dict()
stats_obj = stats_threads.StatsStore()
stats_obj.load()
stats = stats_obj.data
# console.print(stats)
except Exception as ex:
console.log(f"Failed to fetch healthcheck : '{ex}'")
console.log(f"Failed to load stats: '{ex}'")
sys.exit(-1)
else:
now = datetime.datetime.now()
if not stats:
console.log("No stats from aprsd")
sys.exit(-1)
email_thread_last_update = stats["email"]["thread_last_update"]
if email_thread_last_update != "never":
delta = utils.parse_delta_str(email_thread_last_update)
d = datetime.timedelta(**delta)
email_stats = stats.get("EmailStats")
if email_stats:
email_thread_last_update = email_stats["last_check_time"]
if email_thread_last_update != "never":
d = now - email_thread_last_update
max_timeout = {"hours": 0.0, "minutes": 5, "seconds": 0}
max_delta = datetime.timedelta(**max_timeout)
if d > max_delta:
console.log(f"Email thread is very old! {d}")
sys.exit(-1)
client_stats = stats.get("APRSClientStats")
if not client_stats:
console.log("No APRSClientStats")
sys.exit(-1)
else:
aprsis_last_update = client_stats["server_keepalive"]
d = now - aprsis_last_update
max_timeout = {"hours": 0.0, "minutes": 5, "seconds": 0}
max_delta = datetime.timedelta(**max_timeout)
if d > max_delta:
console.log(f"Email thread is very old! {d}")
LOG.error(f"APRS-IS last update is very old! {d}")
sys.exit(-1)
aprsis_last_update = stats["aprs-is"]["last_update"]
delta = utils.parse_delta_str(aprsis_last_update)
d = datetime.timedelta(**delta)
max_timeout = {"hours": 0.0, "minutes": 5, "seconds": 0}
max_delta = datetime.timedelta(**max_timeout)
if d > max_delta:
LOG.error(f"APRS-IS last update is very old! {d}")
sys.exit(-1)
console.log("OK")
sys.exit(0)


@ -21,7 +21,7 @@ from aprsd import cli_helper
from aprsd import plugin as aprsd_plugin
from aprsd.main import cli
from aprsd.plugins import (
email, fortune, location, notify, ping, query, time, version, weather,
email, fortune, location, notify, ping, time, version, weather,
)
@ -122,7 +122,7 @@ def get_installed_extensions():
def show_built_in_plugins(console):
modules = [email, fortune, location, notify, ping, query, time, version, weather]
modules = [email, fortune, location, notify, ping, time, version, weather]
plugins = []
for module in modules:


@ -10,21 +10,29 @@ import sys
import time
import click
from loguru import logger
from oslo_config import cfg
from rich.console import Console
# local imports here
import aprsd
from aprsd import cli_helper, client, packets, plugin, stats, threads
from aprsd import cli_helper, packets, plugin, threads, utils
from aprsd.client import client_factory
from aprsd.main import cli
from aprsd.rpc import server as rpc_server
from aprsd.threads import rx
from aprsd.packets import collector as packet_collector
from aprsd.packets import log as packet_log
from aprsd.packets import seen_list
from aprsd.stats import collector
from aprsd.threads import keep_alive, rx
from aprsd.threads import stats as stats_thread
from aprsd.threads.aprsd import APRSDThread
# setup the global logger
# log.basicConfig(level=log.DEBUG) # level=10
LOG = logging.getLogger("APRSD")
CONF = cfg.CONF
LOGU = logger
console = Console()
@ -37,45 +45,93 @@ def signal_handler(sig, frame):
),
)
time.sleep(5)
LOG.info(stats.APRSDStats())
# Last save to disk
collector.Collector().collect()
class APRSDListenThread(rx.APRSDRXThread):
def __init__(self, packet_queue, packet_filter=None, plugin_manager=None):
def __init__(
self, packet_queue, packet_filter=None, plugin_manager=None,
enabled_plugins=[], log_packets=False,
):
super().__init__(packet_queue)
self.packet_filter = packet_filter
self.plugin_manager = plugin_manager
if self.plugin_manager:
LOG.info(f"Plugins {self.plugin_manager.get_message_plugins()}")
self.log_packets = log_packets
def process_packet(self, *args, **kwargs):
packet = self._client.decode_packet(*args, **kwargs)
filters = {
packets.Packet.__name__: packets.Packet,
packets.AckPacket.__name__: packets.AckPacket,
packets.BeaconPacket.__name__: packets.BeaconPacket,
packets.GPSPacket.__name__: packets.GPSPacket,
packets.MessagePacket.__name__: packets.MessagePacket,
packets.MicEPacket.__name__: packets.MicEPacket,
packets.ObjectPacket.__name__: packets.ObjectPacket,
packets.StatusPacket.__name__: packets.StatusPacket,
packets.ThirdPartyPacket.__name__: packets.ThirdPartyPacket,
packets.WeatherPacket.__name__: packets.WeatherPacket,
packets.UnknownPacket.__name__: packets.UnknownPacket,
}
if self.packet_filter:
filter_class = filters[self.packet_filter]
if isinstance(packet, filter_class):
packet.log(header="RX")
if self.log_packets:
packet_log.log(packet)
if self.plugin_manager:
# Don't do anything with the reply
# This is the listen only command.
self.plugin_manager.run(packet)
else:
if self.log_packets:
LOG.error("PISS")
packet_log.log(packet)
if self.plugin_manager:
# Don't do anything with the reply.
# This is the listen only command.
self.plugin_manager.run(packet)
else:
packet.log(header="RX")
packets.PacketList().rx(packet)
packet_collector.PacketCollector().rx(packet)
class ListenStatsThread(APRSDThread):
"""Log the stats from the PacketList."""
def __init__(self):
super().__init__("PacketStatsLog")
self._last_total_rx = 0
def loop(self):
if self.loop_count % 10 == 0:
# log the stats every 10 seconds
stats_json = collector.Collector().collect()
stats = stats_json["PacketList"]
total_rx = stats["rx"]
rx_delta = total_rx - self._last_total_rx
rate = rx_delta / 10
# Log summary stats
LOGU.opt(colors=True).info(
f"<green>RX Rate: {rate} pps</green> "
f"<yellow>Total RX: {total_rx}</yellow> "
f"<red>RX Last 10 secs: {rx_delta}</red>",
)
self._last_total_rx = total_rx
# Log individual type stats
for k, v in stats["types"].items():
thread_hex = f"fg {utils.hex_from_name(k)}"
LOGU.opt(colors=True).info(
f"<{thread_hex}>{k:<15}</{thread_hex}> "
f"<blue>RX: {v['rx']}</blue> <red>TX: {v['tx']}</red>",
)
time.sleep(1)
return True
@cli.command()
@ -96,17 +152,27 @@ class APRSDListenThread(rx.APRSDRXThread):
"--packet-filter",
type=click.Choice(
[
packets.Packet.__name__,
packets.AckPacket.__name__,
packets.BeaconPacket.__name__,
packets.GPSPacket.__name__,
packets.MicEPacket.__name__,
packets.MessagePacket.__name__,
packets.ObjectPacket.__name__,
packets.RejectPacket.__name__,
packets.StatusPacket.__name__,
packets.ThirdPartyPacket.__name__,
packets.UnknownPacket.__name__,
packets.WeatherPacket.__name__,
],
case_sensitive=False,
),
help="Filter by packet type",
)
@click.option(
"--enable-plugin",
multiple=True,
help="Enable a plugin. This is the name of the file in the plugins directory.",
)
@click.option(
"--load-plugins",
default=False,
@ -118,6 +184,18 @@ class APRSDListenThread(rx.APRSDRXThread):
nargs=-1,
required=True,
)
@click.option(
"--log-packets",
default=False,
is_flag=True,
help="Log incoming packets.",
)
@click.option(
"--enable-packet-stats",
default=False,
is_flag=True,
help="Enable packet stats periodic logging.",
)
@click.pass_context
@cli_helper.process_standard_options
def listen(
@ -125,8 +203,11 @@ def listen(
aprs_login,
aprs_password,
packet_filter,
enable_plugin,
load_plugins,
filter,
log_packets,
enable_packet_stats,
):
"""Listen to packets on the APRS-IS Network based on FILTER.
@@ -159,56 +240,73 @@ def listen(
LOG.info(f"APRSD Listen Started version: {aprsd.__version__}")
CONF.log_opt_values(LOG, logging.DEBUG)
collector.Collector()
# Try and load saved MsgTrack list
LOG.debug("Loading saved MsgTrack object.")
# Initialize the client factory and create
# The correct client object ready for use
client.ClientFactory.setup()
# Make sure we have 1 client transport enabled
if not client.factory.is_client_enabled():
if not client_factory.is_client_enabled():
LOG.error("No Clients are enabled in config.")
sys.exit(-1)
# Creates the client object
LOG.info("Creating client connection")
aprs_client = client.factory.create()
aprs_client = client_factory.create()
LOG.info(aprs_client)
LOG.debug(f"Filter by '{filter}'")
aprs_client.set_filter(filter)
keepalive = threads.KeepAliveThread()
keepalive.start()
keepalive = keep_alive.KeepAliveThread()
if CONF.rpc_settings.enabled:
rpc = rpc_server.APRSDRPCThread()
rpc.start()
if not CONF.enable_seen_list:
# just deregister the class from the packet collector
packet_collector.PacketCollector().unregister(seen_list.SeenList)
pm = None
pm = plugin.PluginManager()
if load_plugins:
pm = plugin.PluginManager()
LOG.info("Loading plugins")
pm.setup_plugins(load_help_plugin=False)
elif enable_plugin:
pm = plugin.PluginManager()
pm.setup_plugins(
load_help_plugin=False,
plugin_list=enable_plugin,
)
else:
LOG.warning(
"Not Loading any plugins use --load-plugins to load what's "
"defined in the config file.",
)
if pm:
for p in pm.get_plugins():
LOG.info("Loaded plugin %s", p.__class__.__name__)
stats = stats_thread.APRSDStatsStoreThread()
stats.start()
LOG.debug("Create APRSDListenThread")
listen_thread = APRSDListenThread(
packet_queue=threads.packet_queue,
packet_filter=packet_filter,
plugin_manager=pm,
enabled_plugins=enable_plugin,
log_packets=log_packets,
)
LOG.debug("Start APRSDListenThread")
listen_thread.start()
if enable_packet_stats:
listen_stats = ListenStatsThread()
listen_stats.start()
keepalive.start()
LOG.debug("keepalive Join")
keepalive.join()
LOG.debug("listen_thread Join")
listen_thread.join()
if CONF.rpc_settings.enabled:
rpc.join()
stats.join()


@@ -8,9 +8,13 @@ import click
from oslo_config import cfg
import aprsd
from aprsd import cli_helper, client, packets
from aprsd import cli_helper, packets
from aprsd import conf # noqa : F401
from aprsd.client import client_factory
from aprsd.main import cli
import aprsd.packets # noqa : F401
from aprsd.packets import collector
from aprsd.packets import log as packet_log
from aprsd.threads import tx
@@ -76,7 +80,6 @@ def send_message(
aprs_login = CONF.aprs_network.login
if not aprs_password:
if not CONF.aprs_network.password:
click.echo("Must set --aprs-password or APRS_PASSWORD")
ctx.exit(-1)
@@ -93,19 +96,15 @@ def send_message(
else:
LOG.info(f"L'{aprs_login}' To'{tocallsign}' C'{command}'")
packets.PacketList()
packets.WatchList()
packets.SeenList()
got_ack = False
got_response = False
def rx_packet(packet):
nonlocal got_ack, got_response  # locals of send_message(), not module globals
cl = client.factory.create()
cl = client_factory.create()
packet = cl.decode_packet(packet)
packets.PacketList().rx(packet)
packet.log("RX")
collector.PacketCollector().rx(packet)
packet_log.log(packet, tx=False)
# LOG.debug("Got packet back {}".format(packet))
if isinstance(packet, packets.AckPacket):
got_ack = True
@@ -130,8 +129,7 @@ def send_message(
sys.exit(0)
try:
client.ClientFactory.setup()
client.factory.create().client
client_factory.create().client
except LoginError:
sys.exit(-1)
@@ -163,7 +161,7 @@ def send_message(
# This will register a packet consumer with aprslib
# When new packets come in the consumer will process
# the packet
aprs_client = client.factory.create().client
aprs_client = client_factory.create().client
aprs_client.consumer(rx_packet, raw=False)
except aprslib.exceptions.ConnectionDrop:
LOG.error("Connection dropped, reconnecting")


@@ -6,12 +6,16 @@ import click
from oslo_config import cfg
import aprsd
from aprsd import cli_helper, client
from aprsd import cli_helper
from aprsd import main as aprsd_main
from aprsd import packets, plugin, threads, utils
from aprsd import plugin, threads, utils
from aprsd.client import client_factory
from aprsd.main import cli
from aprsd.rpc import server as rpc_server
from aprsd.threads import registry, rx, tx
from aprsd.packets import collector as packet_collector
from aprsd.packets import seen_list
from aprsd.threads import keep_alive, log_monitor, registry, rx
from aprsd.threads import stats as stats_thread
from aprsd.threads import tx
CONF = cfg.CONF
@@ -46,7 +50,14 @@ def server(ctx, flush):
# Initialize the client factory and create
# The correct client object ready for use
client.ClientFactory.setup()
if not client_factory.is_client_enabled():
LOG.error("No Clients are enabled in config.")
sys.exit(-1)
# Creates the client object
LOG.info("Creating client connection")
aprs_client = client_factory.create()
LOG.info(aprs_client)
# Create the initial PM singleton and Register plugins
# We register plugins first here so we can register each
@@ -68,35 +79,35 @@ def server(ctx, flush):
LOG.info(p)
# Make sure we have 1 client transport enabled
if not client.factory.is_client_enabled():
if not client_factory.is_client_enabled():
LOG.error("No Clients are enabled in config.")
sys.exit(-1)
if not client.factory.is_client_configured():
if not client_factory.is_client_configured():
LOG.error("APRS client is not properly configured in config file.")
sys.exit(-1)
# Creates the client object
# LOG.info("Creating client connection")
# client.factory.create().client
if not CONF.enable_seen_list:
# just deregister the class from the packet collector
packet_collector.PacketCollector().unregister(seen_list.SeenList)
# Now load the msgTrack from disk if any
packets.PacketList()
if flush:
LOG.debug("Deleting saved MsgTrack.")
packets.PacketTrack().flush()
packets.WatchList().flush()
packets.SeenList().flush()
LOG.debug("Flushing All packet tracking objects.")
packet_collector.PacketCollector().flush()
else:
# Try and load saved MsgTrack list
LOG.debug("Loading saved MsgTrack object.")
packets.PacketTrack().load()
packets.WatchList().load()
packets.SeenList().load()
LOG.debug("Loading saved packet tracking data.")
packet_collector.PacketCollector().load()
keepalive = threads.KeepAliveThread()
# Now start all the main processing threads.
keepalive = keep_alive.KeepAliveThread()
keepalive.start()
stats_store_thread = stats_thread.APRSDStatsStoreThread()
stats_store_thread.start()
rx_thread = rx.APRSDPluginRXThread(
packet_queue=threads.packet_queue,
)
@@ -106,7 +117,6 @@ def server(ctx, flush):
rx_thread.start()
process_thread.start()
packets.PacketTrack().restart()
if CONF.enable_beacon:
LOG.info("Beacon Enabled. Starting Beacon thread.")
bcn_thread = tx.BeaconSendThread()
@@ -117,11 +127,9 @@ def server(ctx, flush):
registry_thread = registry.APRSRegistryThread()
registry_thread.start()
if CONF.rpc_settings.enabled:
rpc = rpc_server.APRSDRPCThread()
rpc.start()
log_monitor = threads.log_monitor.LogMonitorThread()
log_monitor.start()
if CONF.admin.web_enabled:
log_monitor_thread = log_monitor.LogMonitorThread()
log_monitor_thread.start()
rx_thread.join()
process_thread.join()


@@ -7,7 +7,6 @@ import sys
import threading
import time
from aprslib import util as aprslib_util
import click
import flask
from flask import request
@@ -22,15 +21,15 @@ import aprsd
from aprsd import (
cli_helper, client, packets, plugin_utils, stats, threads, utils,
)
from aprsd.log import log
from aprsd.client import client_factory, kiss
from aprsd.main import cli
from aprsd.threads import aprsd as aprsd_threads
from aprsd.threads import rx, tx
from aprsd.threads import keep_alive, rx, tx
from aprsd.utils import trace
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
LOG = logging.getLogger()
auth = HTTPBasicAuth()
users = {}
socketio = None
@@ -63,9 +62,7 @@ def signal_handler(sig, frame):
threads.APRSDThreadList().stop_all()
if "subprocess" not in str(frame):
time.sleep(1.5)
# packets.WatchList().save()
# packets.SeenList().save()
LOG.info(stats.APRSDStats())
stats.stats_collector.collect()
LOG.info("Telling flask to bail.")
signal.signal(signal.SIGTERM, sys.exit(0))
@@ -335,7 +332,6 @@ class WebChatProcessPacketThread(rx.APRSDProcessPacketThread):
def process_our_message_packet(self, packet: packets.MessagePacket):
global callsign_locations
LOG.info(f"process MessagePacket {repr(packet)}")
# ok lets see if we have the location for the
# person we just sent a message to.
from_call = packet.get("from_call").upper()
@@ -381,10 +377,10 @@ def _get_transport(stats):
transport = "aprs-is"
aprs_connection = (
"APRS-IS Server: <a href='http://status.aprs2.net' >"
"{}</a>".format(stats["stats"]["aprs-is"]["server"])
"{}</a>".format(stats["APRSClientStats"]["server_string"])
)
elif client.KISSClient.is_enabled():
transport = client.KISSClient.transport()
elif kiss.KISSClient.is_enabled():
transport = kiss.KISSClient.transport()
if transport == client.TRANSPORT_TCPKISS:
aprs_connection = (
"TCPKISS://{}:{}".format(
@@ -422,7 +418,7 @@ def index():
html_template = "index.html"
LOG.debug(f"Template {html_template}")
transport, aprs_connection = _get_transport(stats)
transport, aprs_connection = _get_transport(stats["stats"])
LOG.debug(f"transport {transport} aprs_connection {aprs_connection}")
stats["transport"] = transport
@@ -457,27 +453,28 @@ def send_message_status():
def _stats():
stats_obj = stats.APRSDStats()
now = datetime.datetime.now()
time_format = "%m-%d-%Y %H:%M:%S"
stats_dict = stats_obj.stats()
stats_dict = stats.stats_collector.collect(serializable=True)
# Webchat doesn't need these
if "watch_list" in stats_dict["aprsd"]:
del stats_dict["aprsd"]["watch_list"]
if "seen_list" in stats_dict["aprsd"]:
del stats_dict["aprsd"]["seen_list"]
if "threads" in stats_dict["aprsd"]:
del stats_dict["aprsd"]["threads"]
# del stats_dict["email"]
# del stats_dict["plugins"]
# del stats_dict["messages"]
if "WatchList" in stats_dict:
del stats_dict["WatchList"]
if "SeenList" in stats_dict:
del stats_dict["SeenList"]
if "APRSDThreadList" in stats_dict:
del stats_dict["APRSDThreadList"]
if "PacketList" in stats_dict:
del stats_dict["PacketList"]
if "EmailStats" in stats_dict:
del stats_dict["EmailStats"]
if "PluginManager" in stats_dict:
del stats_dict["PluginManager"]
result = {
"time": now.strftime(time_format),
"stats": stats_dict,
}
return result
@@ -541,18 +538,27 @@ class SendMessageNamespace(Namespace):
def on_gps(self, data):
LOG.debug(f"WS on_GPS: {data}")
lat = aprslib_util.latitude_to_ddm(data["latitude"])
long = aprslib_util.longitude_to_ddm(data["longitude"])
LOG.debug(f"Lat DDM {lat}")
LOG.debug(f"Long DDM {long}")
lat = data["latitude"]
long = data["longitude"]
LOG.debug(f"Lat {lat}")
LOG.debug(f"Long {long}")
path = data.get("path", None)
if not path:
path = []
elif "," in path:
path_opts = path.split(",")
path = [x.strip() for x in path_opts]
else:
path = [path]
tx.send(
packets.GPSPacket(
packets.BeaconPacket(
from_call=CONF.callsign,
to_call="APDW16",
latitude=lat,
longitude=long,
comment="APRSD WebChat Beacon",
path=path,
),
direct=True,
)
@@ -572,8 +578,6 @@ class SendMessageNamespace(Namespace):
def init_flask(loglevel, quiet):
global socketio, flask_app
log.setup_logging(loglevel, quiet)
socketio = SocketIO(
flask_app, logger=False, engineio_logger=False,
async_mode="threading",
@@ -624,7 +628,7 @@ def webchat(ctx, flush, port):
LOG.info(msg)
LOG.info(f"APRSD Started version: {aprsd.__version__}")
CONF.log_opt_values(LOG, logging.DEBUG)
CONF.log_opt_values(logging.getLogger(), logging.DEBUG)
user = CONF.admin.user
users[user] = generate_password_hash(CONF.admin.password)
if not port:
@@ -632,22 +636,16 @@ def webchat(ctx, flush, port):
# Initialize the client factory and create
# The correct client object ready for use
client.ClientFactory.setup()
# Make sure we have 1 client transport enabled
if not client.factory.is_client_enabled():
if not client_factory.is_client_enabled():
LOG.error("No Clients are enabled in config.")
sys.exit(-1)
if not client.factory.is_client_configured():
if not client_factory.is_client_configured():
LOG.error("APRS client is not properly configured in config file.")
sys.exit(-1)
packets.PacketList()
packets.PacketTrack()
packets.WatchList()
packets.SeenList()
keepalive = threads.KeepAliveThread()
keepalive = keep_alive.KeepAliveThread()
LOG.info("Start KeepAliveThread")
keepalive.start()


@@ -15,10 +15,6 @@ watch_list_group = cfg.OptGroup(
name="watch_list",
title="Watch List settings",
)
rpc_group = cfg.OptGroup(
name="rpc_settings",
title="RPC Settings for admin <--> web",
)
webchat_group = cfg.OptGroup(
name="webchat",
title="Settings specific to the webchat command",
@@ -101,6 +97,51 @@ aprsd_opts = [
default=None,
help="Longitude for the GPS Beacon button. If not set, the button will not be enabled.",
),
cfg.StrOpt(
"log_packet_format",
choices=["compact", "multiline", "both"],
default="compact",
help="When logging packets 'compact' will use a single line formatted for each packet."
"'multiline' will use multiple lines for each packet and is the traditional format."
"both will log both compact and multiline.",
),
cfg.IntOpt(
"default_packet_send_count",
default=3,
help="The number of times to send a non ack packet before giving up.",
),
cfg.IntOpt(
"default_ack_send_count",
default=3,
help="The number of times to send an ack packet in response to recieving a packet.",
),
cfg.IntOpt(
"packet_list_maxlen",
default=100,
help="The maximum number of packets to store in the packet list.",
),
cfg.IntOpt(
"packet_list_stats_maxlen",
default=20,
help="The maximum number of packets to send in the stats dict for admin ui.",
),
cfg.BoolOpt(
"enable_seen_list",
default=True,
help="Enable the Callsign seen list tracking feature. This allows aprsd to keep track of "
"callsigns that have been seen and when they were last seen.",
),
cfg.BoolOpt(
"enable_packet_logging",
default=True,
help="Set this to False, to disable logging of packets to the log file.",
),
cfg.BoolOpt(
"enable_sending_ack_packets",
default=True,
help="Set this to False, to disable sending of ack packets. This will entirely stop"
"APRSD from sending ack packets.",
),
]
watch_list_opts = [
@@ -138,7 +179,7 @@ admin_opts = [
default=False,
help="Enable the Admin Web Interface",
),
cfg.IPOpt(
cfg.StrOpt(
"web_ip",
default="0.0.0.0",
help="The ip address to listen on",
@@ -161,28 +202,6 @@ admin_opts = [
),
]
rpc_opts = [
cfg.BoolOpt(
"enabled",
default=True,
help="Enable RPC calls",
),
cfg.StrOpt(
"ip",
default="localhost",
help="The ip address to listen on",
),
cfg.PortOpt(
"port",
default=18861,
help="The port to listen on",
),
cfg.StrOpt(
"magic_word",
default=APRSD_DEFAULT_MAGIC_WORD,
help="Magic word to authenticate requests between client/server",
),
]
enabled_plugins_opts = [
cfg.ListOpt(
@@ -192,7 +211,6 @@ enabled_plugins_opts = [
"aprsd.plugins.fortune.FortunePlugin",
"aprsd.plugins.location.LocationPlugin",
"aprsd.plugins.ping.PingPlugin",
"aprsd.plugins.query.QueryPlugin",
"aprsd.plugins.time.TimePlugin",
"aprsd.plugins.weather.OWMWeatherPlugin",
"aprsd.plugins.version.VersionPlugin",
@@ -205,7 +223,7 @@ enabled_plugins_opts = [
]
webchat_opts = [
cfg.IPOpt(
cfg.StrOpt(
"web_ip",
default="0.0.0.0",
help="The ip address to listen on",
@@ -225,10 +243,15 @@ webchat_opts = [
default=None,
help="Longitude for the GPS Beacon button. If not set, the button will not be enabled.",
),
cfg.BoolOpt(
"disable_url_request_logging",
default=False,
help="Disable the logging of url requests in the webchat command.",
),
]
registry_opts = [
cfg.StrOpt(
cfg.BoolOpt(
"enabled",
default=False,
help="Enable sending aprs registry information. This will let the "
@@ -268,8 +291,6 @@ def register_opts(config):
config.register_opts(admin_opts, group=admin_group)
config.register_group(watch_list_group)
config.register_opts(watch_list_opts, group=watch_list_group)
config.register_group(rpc_group)
config.register_opts(rpc_opts, group=rpc_group)
config.register_group(webchat_group)
config.register_opts(webchat_opts, group=webchat_group)
config.register_group(registry_group)
@@ -281,7 +302,6 @@ def list_opts():
"DEFAULT": (aprsd_opts + enabled_plugins_opts),
admin_group.name: admin_opts,
watch_list_group.name: watch_list_opts,
rpc_group.name: rpc_opts,
webchat_group.name: webchat_opts,
registry_group.name: registry_opts,
}


@@ -31,13 +31,6 @@ aprsfi_opts = [
),
]
query_plugin_opts = [
cfg.StrOpt(
"callsign",
help="The Ham callsign to allow access to the query plugin from RF.",
),
]
owm_wx_opts = [
cfg.StrOpt(
"apiKey",
@@ -172,7 +165,6 @@ def register_opts(config):
config.register_group(aprsfi_group)
config.register_opts(aprsfi_opts, group=aprsfi_group)
config.register_group(query_group)
config.register_opts(query_plugin_opts, group=query_group)
config.register_group(owm_wx_group)
config.register_opts(owm_wx_opts, group=owm_wx_group)
config.register_group(avwx_group)
@@ -184,7 +176,6 @@ def list_opts():
def list_opts():
return {
aprsfi_group.name: aprsfi_opts,
query_group.name: query_plugin_opts,
owm_wx_group.name: owm_wx_opts,
avwx_group.name: avwx_opts,
location_group.name: location_opts,


@@ -1,5 +1,4 @@
import logging
from logging import NullHandler
from logging.handlers import QueueHandler
import queue
import sys
@@ -7,12 +6,28 @@ import sys
from loguru import logger
from oslo_config import cfg
from aprsd import conf
from aprsd.conf import log as conf_log
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
logging_queue = queue.Queue()
# LOG = logging.getLogger("APRSD")
LOG = logger
class QueueLatest(queue.Queue):
"""Custom Queue to keep only the latest N items.
This prevents the queue from blowing up in size.
"""
def put(self, *args, **kwargs):
try:
super().put(*args, **kwargs)
except queue.Full:
self.queue.popleft()
super().put(*args, **kwargs)
logging_queue = QueueLatest(maxsize=200)
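
A quick demonstration of the drop-oldest behavior, assuming CPython's queue.Queue, whose backing store (self.queue) is a deque and whose put_nowait() raises queue.Full once maxsize is reached; this is how logging.handlers.QueueHandler enqueues records:

import queue


class QueueLatest(queue.Queue):
    """Bounded queue that evicts its oldest entry instead of blocking."""

    def put(self, *args, **kwargs):
        try:
            super().put(*args, **kwargs)
        except queue.Full:
            self.queue.popleft()  # drop the oldest record
            super().put(*args, **kwargs)


q = QueueLatest(maxsize=3)
for i in range(5):
    q.put_nowait(i)  # raises queue.Full internally once 3 items are queued
print(list(q.queue))  # -> [2, 3, 4]: the two oldest entries were evicted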
class InterceptHandler(logging.Handler):
@@ -39,7 +54,7 @@ def setup_logging(loglevel=None, quiet=False):
if not loglevel:
log_level = CONF.logging.log_level
else:
log_level = conf.log.LOG_LEVELS[loglevel]
log_level = conf_log.LOG_LEVELS[loglevel]
# intercept everything at the root logger
logging.root.handlers = [InterceptHandler()]
@@ -54,9 +69,19 @@ def setup_logging(loglevel=None, quiet=False):
"aprslib.parsing",
"aprslib.exceptions",
]
webserver_list = [
"werkzeug",
"werkzeug._internal",
"socketio",
"urllib3.connectionpool",
"chardet",
"chardet.charsetgroupprober",
"chardet.eucjpprober",
"chardet.mbcharsetprober",
]
# We don't really want to see the aprslib parsing debug output.
disable_list = imap_list + aprslib_list
disable_list = imap_list + aprslib_list + webserver_list
# remove every other logger's handlers
# and propagate to root logger
@@ -67,17 +92,29 @@ def setup_logging(loglevel=None, quiet=False):
else:
logging.getLogger(name).propagate = True
if CONF.webchat.disable_url_request_logging:
for name in webserver_list:
logging.getLogger(name).handlers = []
logging.getLogger(name).propagate = True
logging.getLogger(name).setLevel(logging.ERROR)
handlers = [
{
"sink": sys.stdout, "serialize": False,
"sink": sys.stdout,
"serialize": False,
"format": CONF.logging.logformat,
"colorize": True,
"level": log_level,
},
]
if CONF.logging.logfile:
handlers.append(
{
"sink": CONF.logging.logfile, "serialize": False,
"sink": CONF.logging.logfile,
"serialize": False,
"format": CONF.logging.logformat,
"colorize": False,
"level": log_level,
},
)
@@ -91,25 +128,11 @@ def setup_logging(loglevel=None, quiet=False):
{
"sink": qh, "serialize": False,
"format": CONF.logging.logformat,
"level": log_level,
"colorize": False,
},
)
# configure loguru
logger.configure(handlers=handlers)
def setup_logging_no_config(loglevel, quiet):
log_level = conf.log.LOG_LEVELS[loglevel]
LOG.setLevel(log_level)
log_format = CONF.logging.logformat
date_format = CONF.logging.date_format
log_formatter = logging.Formatter(fmt=log_format, datefmt=date_format)
fh = NullHandler()
fh.setFormatter(log_formatter)
LOG.addHandler(fh)
if not quiet:
sh = logging.StreamHandler(sys.stdout)
sh.setFormatter(log_formatter)
LOG.addHandler(sh)
logger.level("DEBUG", color="<fg #BABABA>")


@@ -24,18 +24,17 @@ import datetime
import importlib.metadata as imp
from importlib.metadata import version as metadata_version
import logging
import os
import signal
import sys
import time
import click
import click_completion
from oslo_config import cfg, generator
# local imports here
import aprsd
from aprsd import cli_helper, packets, stats, threads, utils
from aprsd import cli_helper, packets, threads, utils
from aprsd.stats import collector
# setup the global logger
@@ -44,19 +43,6 @@ CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])
flask_enabled = False
rpc_serv = None
def custom_startswith(string, incomplete):
"""A custom completion match that supports case insensitive matching."""
if os.environ.get("_CLICK_COMPLETION_COMMAND_CASE_INSENSITIVE_COMPLETE"):
string = string.lower()
incomplete = incomplete.lower()
return string.startswith(incomplete)
click_completion.core.startswith = custom_startswith
click_completion.init()
@click.group(cls=cli_helper.AliasedGroup, context_settings=CONTEXT_SETTINGS)
@@ -68,7 +54,7 @@ def cli(ctx):
def load_commands():
from .cmds import ( # noqa
completion, dev, fetch_stats, healthcheck, list_plugins, listen,
admin, completion, dev, fetch_stats, healthcheck, list_plugins, listen,
send_message, server, webchat,
)
@@ -93,10 +79,15 @@ def signal_handler(sig, frame):
),
)
time.sleep(1.5)
packets.PacketTrack().save()
packets.WatchList().save()
packets.SeenList().save()
LOG.info(stats.APRSDStats())
try:
packets.PacketTrack().save()
packets.WatchList().save()
packets.SeenList().save()
packets.PacketList().save()
collector.Collector().collect()
except Exception as e:
LOG.error(f"Failed to save data: {e}")
sys.exit(0)
# signal.signal(signal.SIGTERM, sys.exit(0))
# sys.exit(0)
@@ -122,10 +113,25 @@ def check_version(ctx):
def sample_config(ctx):
"""Generate a sample Config file from aprsd and all installed plugins."""
def _get_selected_entry_points():
if sys.version_info < (3, 10):
all = imp.entry_points()
selected = []
if "oslo.config.opts" in all:
for x in all["oslo.config.opts"]:
if x.group == "oslo.config.opts":
selected.append(x)
else:
selected = imp.entry_points(group="oslo.config.opts")
return selected
def get_namespaces():
args = []
selected = imp.entry_points(group="oslo.config.opts")
# selected = imp.entry_points(group="oslo.config.opts")
selected = _get_selected_entry_points()
for entry in selected:
if "aprsd" in entry.name:
args.append("--namespace")
@@ -145,7 +151,6 @@ def sample_config(ctx):
if not sys.argv[1:]:
raise SystemExit
raise
LOG.warning(conf.namespace)
generator.generate(conf)
return


@@ -1,4 +0,0 @@
# What to return from a plugin if we have processed the message
# and it's ok, but don't send a usage string back
# REMOVE THIS FILE


@@ -1,6 +1,8 @@
from aprsd.packets import collector
from aprsd.packets.core import ( # noqa: F401
AckPacket, BeaconPacket, GPSPacket, MessagePacket, MicEPacket, Packet,
RejectPacket, StatusPacket, WeatherPacket,
AckPacket, BeaconPacket, BulletinPacket, GPSPacket, MessagePacket,
MicEPacket, ObjectPacket, Packet, RejectPacket, StatusPacket,
ThirdPartyPacket, UnknownPacket, WeatherPacket, factory,
)
from aprsd.packets.packet_list import PacketList # noqa: F401
from aprsd.packets.seen_list import SeenList # noqa: F401
@@ -8,4 +10,11 @@ from aprsd.packets.tracker import PacketTrack # noqa: F401
from aprsd.packets.watch_list import WatchList # noqa: F401
# Register all the packet tracking objects.
collector.PacketCollector().register(PacketList)
collector.PacketCollector().register(SeenList)
collector.PacketCollector().register(PacketTrack)
collector.PacketCollector().register(WatchList)
NULL_MESSAGE = -1


@@ -0,0 +1,79 @@
import logging
from typing import Callable, Protocol, runtime_checkable
from aprsd.packets import core
from aprsd.utils import singleton
LOG = logging.getLogger("APRSD")
@runtime_checkable
class PacketMonitor(Protocol):
"""Protocol for Monitoring packets in some way."""
def rx(self, packet: type[core.Packet]) -> None:
"""When we get a packet from the network."""
...
def tx(self, packet: type[core.Packet]) -> None:
"""When we send a packet out the network."""
...
def flush(self) -> None:
"""Flush out any data."""
...
def load(self) -> None:
"""Load any data."""
...
@singleton
class PacketCollector:
def __init__(self):
self.monitors: list[Callable] = []
def register(self, monitor: Callable) -> None:
if not isinstance(monitor, PacketMonitor):
raise TypeError(f"Monitor {monitor} is not a PacketMonitor")
self.monitors.append(monitor)
def unregister(self, monitor: Callable) -> None:
if not isinstance(monitor, PacketMonitor):
raise TypeError(f"Monitor {monitor} is not a PacketMonitor")
self.monitors.remove(monitor)
def rx(self, packet: type[core.Packet]) -> None:
for name in self.monitors:
cls = name()
try:
cls.rx(packet)
except Exception as e:
LOG.error(f"Error in monitor {name} (rx): {e}")
def tx(self, packet: type[core.Packet]) -> None:
for name in self.monitors:
cls = name()
try:
cls.tx(packet)
except Exception as e:
LOG.error(f"Error in monitor {name} (tx): {e}")
def flush(self):
"""Call flush on the objects. This is used to flush out any data."""
for name in self.monitors:
cls = name()
try:
cls.flush()
except Exception as e:
LOG.error(f"Error in monitor {name} (flush): {e}")
def load(self):
"""Call load on the objects. This is used to load any data."""
for name in self.monitors:
cls = name()
try:
cls.load()
except Exception as e:
LOG.error(f"Error in monitor {name} (load): {e}")

(File diff suppressed because it is too large.)

aprsd/packets/log.py (new file, 161 lines)

@@ -0,0 +1,161 @@
import logging
from typing import Optional
from geopy.distance import geodesic
from loguru import logger
from oslo_config import cfg
from aprsd import utils
from aprsd.packets.core import AckPacket, GPSPacket, RejectPacket
LOG = logging.getLogger()
LOGU = logger
CONF = cfg.CONF
FROM_COLOR = "fg #C70039"
TO_COLOR = "fg #D033FF"
TX_COLOR = "red"
RX_COLOR = "green"
PACKET_COLOR = "cyan"
DISTANCE_COLOR = "fg #FF5733"
DEGREES_COLOR = "fg #FFA900"
def log_multiline(packet, tx: Optional[bool] = False, header: Optional[bool] = True) -> None:
"""LOG a packet to the logfile."""
if not CONF.enable_packet_logging:
return
if CONF.log_packet_format == "compact":
return
# asdict(packet)
logit = ["\n"]
name = packet.__class__.__name__
if isinstance(packet, AckPacket):
pkt_max_send_count = CONF.default_ack_send_count
else:
pkt_max_send_count = CONF.default_packet_send_count
if header:
if tx:
header_str = f"<{TX_COLOR}>TX</{TX_COLOR}>"
logit.append(
f"{header_str}________(<{PACKET_COLOR}>{name}</{PACKET_COLOR}> "
f"TX:{packet.send_count + 1} of {pkt_max_send_count}",
)
else:
header_str = f"<{RX_COLOR}>RX</{RX_COLOR}>"
logit.append(
f"{header_str}________(<{PACKET_COLOR}>{name}</{PACKET_COLOR}>)",
)
else:
header_str = ""
logit.append(f"__________(<{PACKET_COLOR}>{name}</{PACKET_COLOR}>)")
# log_list.append(f" Packet : {packet.__class__.__name__}")
if packet.msgNo:
logit.append(f" Msg # : {packet.msgNo}")
if packet.from_call:
logit.append(f" From : <{FROM_COLOR}>{packet.from_call}</{FROM_COLOR}>")
if packet.to_call:
logit.append(f" To : <{TO_COLOR}>{packet.to_call}</{TO_COLOR}>")
if hasattr(packet, "path") and packet.path:
logit.append(f" Path : {'=>'.join(packet.path)}")
if hasattr(packet, "via") and packet.via:
logit.append(f" VIA : {packet.via}")
if not isinstance(packet, AckPacket) and not isinstance(packet, RejectPacket):
msg = packet.human_info
if msg:
msg = msg.replace("<", "\\<")
logit.append(f" Info : <light-yellow><b>{msg}</b></light-yellow>")
if hasattr(packet, "comment") and packet.comment:
logit.append(f" Comment : {packet.comment}")
raw = packet.raw.replace("<", "\\<")
logit.append(f" Raw : <fg #828282>{raw}</fg #828282>")
logit.append(f"{header_str}________(<{PACKET_COLOR}>{name}</{PACKET_COLOR}>)")
LOGU.opt(colors=True).info("\n".join(logit))
LOG.debug(repr(packet))
def log(packet, tx: Optional[bool] = False, header: Optional[bool] = True) -> None:
if not CONF.enable_packet_logging:
return
if CONF.log_packet_format == "multiline":
log_multiline(packet, tx, header)
return
logit = []
name = packet.__class__.__name__
if isinstance(packet, AckPacket):
pkt_max_send_count = CONF.default_ack_send_count
else:
pkt_max_send_count = CONF.default_packet_send_count
if header:
if tx:
via_color = "red"
arrow = f"<{via_color}>\u2192</{via_color}>"
logit.append(
f"<red>TX\u2191</red> "
f"<cyan>{name}</cyan>"
f":{packet.msgNo}"
f" ({packet.send_count + 1} of {pkt_max_send_count})",
)
else:
via_color = "fg #1AA730"
arrow = f"<{via_color}>\u2192</{via_color}>"
f"<{via_color}><-</{via_color}>"
logit.append(
f"<fg #1AA730>RX\u2193</fg #1AA730> "
f"<cyan>{name}</cyan>"
f":{packet.msgNo}",
)
else:
via_color = "green"
arrow = f"<{via_color}>-></{via_color}>"
logit.append(
f"<cyan>{name}</cyan>"
f":{packet.msgNo}",
)
tmp = None
if packet.path:
tmp = arrow.join(packet.path) + f"{arrow} "
logit.append(
f"<{FROM_COLOR}>{packet.from_call}</{FROM_COLOR}> {arrow}"
f"{tmp if tmp else ' '}"
f"<{TO_COLOR}>{packet.to_call}</{TO_COLOR}>",
)
if not isinstance(packet, AckPacket) and not isinstance(packet, RejectPacket):
logit.append(":")
msg = packet.human_info
if msg:
msg = msg.replace("<", "\\<")
logit.append(f"<light-yellow><b>{msg}</b></light-yellow>")
# is there distance information?
if isinstance(packet, GPSPacket) and CONF.latitude and CONF.longitude:
my_coords = (CONF.latitude, CONF.longitude)
packet_coords = (packet.latitude, packet.longitude)
try:
bearing = utils.calculate_initial_compass_bearing(my_coords, packet_coords)
except Exception as e:
LOG.error(f"Failed to calculate bearing: {e}")
bearing = 0
logit.append(
f" : <{DEGREES_COLOR}>{utils.degrees_to_cardinal(bearing, full_string=True)}</{DEGREES_COLOR}>"
f"<{DISTANCE_COLOR}>@{geodesic(my_coords, packet_coords).miles:.2f}miles</{DISTANCE_COLOR}>",
)
LOGU.opt(colors=True).info(" ".join(logit))
log_multiline(packet, tx, header)
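
A standalone sketch of the distance and bearing math the compact log line uses when the station's own latitude/longitude are configured; geodesic() is geopy's real API, while the bearing function below is the standard forward-azimuth formula written inline as a stand-in for aprsd's utils helper:

import math

from geopy.distance import geodesic


def initial_bearing(point_a, point_b):
    """Forward azimuth from point_a to point_b in degrees (0 = north)."""
    lat1, lat2 = math.radians(point_a[0]), math.radians(point_b[0])
    dlon = math.radians(point_b[1] - point_a[1])
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360


me = (45.52, -122.68)    # hypothetical receiver coordinates
them = (47.61, -122.33)  # hypothetical packet coordinates
print(f"{initial_bearing(me, them):.0f} degrees @ {geodesic(me, them).miles:.2f} miles")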


@@ -1,99 +1,100 @@
from collections import OrderedDict
from collections.abc import MutableMapping
import logging
import threading
from oslo_config import cfg
import wrapt
from aprsd import stats
from aprsd.packets import seen_list
from aprsd.packets import core
from aprsd.utils import objectstore
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
class PacketList(MutableMapping):
class PacketList(objectstore.ObjectStoreMixin):
"""Class to keep track of the packets we tx/rx."""
_instance = None
lock = threading.Lock()
_total_rx: int = 0
_total_tx: int = 0
types = {}
maxlen: int = 100
def __new__(cls, *args, **kwargs):
if cls._instance is None:
cls._instance = super().__new__(cls)
cls._maxlen = 100
cls.d = OrderedDict()
cls._instance.maxlen = CONF.packet_list_maxlen
cls._instance._init_data()
return cls._instance
@wrapt.synchronized(lock)
def rx(self, packet):
"""Add a packet that was received."""
self._total_rx += 1
self._add(packet)
ptype = packet.__class__.__name__
if not ptype in self.types:
self.types[ptype] = {"tx": 0, "rx": 0}
self.types[ptype]["rx"] += 1
seen_list.SeenList().update_seen(packet)
stats.APRSDStats().rx(packet)
def _init_data(self):
self.data = {
"types": {},
"packets": OrderedDict(),
}
@wrapt.synchronized(lock)
def tx(self, packet):
def rx(self, packet: type[core.Packet]):
"""Add a packet that was received."""
self._total_tx += 1
self._add(packet)
ptype = packet.__class__.__name__
if not ptype in self.types:
self.types[ptype] = {"tx": 0, "rx": 0}
self.types[ptype]["tx"] += 1
seen_list.SeenList().update_seen(packet)
stats.APRSDStats().tx(packet)
with self.lock:
self._total_rx += 1
self._add(packet)
ptype = packet.__class__.__name__
type_stats = self.data["types"].setdefault(
ptype, {"tx": 0, "rx": 0},
)
type_stats["rx"] += 1
def tx(self, packet: type[core.Packet]):
"""Add a packet that was received."""
with self.lock:
self._total_tx += 1
self._add(packet)
ptype = packet.__class__.__name__
type_stats = self.data["types"].setdefault(
ptype, {"tx": 0, "rx": 0},
)
type_stats["tx"] += 1
@wrapt.synchronized(lock)
def add(self, packet):
self._add(packet)
with self.lock:
self._add(packet)
def _add(self, packet):
self[packet.key] = packet
if not self.data.get("packets"):
self._init_data()
if packet.key in self.data["packets"]:
self.data["packets"].move_to_end(packet.key)
elif len(self.data["packets"]) == self.maxlen:
self.data["packets"].popitem(last=False)
self.data["packets"][packet.key] = packet
def copy(self):
return self.d.copy()
@property
def maxlen(self):
return self._maxlen
@wrapt.synchronized(lock)
def find(self, packet):
return self.get(packet.key)
def __getitem__(self, key):
# self.d.move_to_end(key)
return self.d[key]
def __setitem__(self, key, value):
if key in self.d:
self.d.move_to_end(key)
elif len(self.d) == self.maxlen:
self.d.popitem(last=False)
self.d[key] = value
def __delitem__(self, key):
del self.d[key]
def __iter__(self):
return self.d.__iter__()
with self.lock:
return self.data["packets"][packet.key]
def __len__(self):
return len(self.d)
with self.lock:
return len(self.data["packets"])
@wrapt.synchronized(lock)
def total_rx(self):
return self._total_rx
with self.lock:
return self._total_rx
@wrapt.synchronized(lock)
def total_tx(self):
return self._total_tx
with self.lock:
return self._total_tx
def stats(self, serializable=False) -> dict:
with self.lock:
# Get last N packets directly using list slicing
packets_list = list(self.data.get("packets", {}).values())
pkts = packets_list[-CONF.packet_list_stats_maxlen:][::-1]
stats = {
"total_tracked": self._total_rx + self._total_tx, # Fixed typo: was rx + rx
"rx": self._total_rx,
"tx": self._total_tx,
"types": self.data.get("types", {}), # Changed default from [] to {}
"packet_count": len(self.data.get("packets", [])),
"maxlen": self.maxlen,
"packets": pkts,
}
return stats
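
A usage sketch of the singleton PacketList; it assumes aprsd's configuration has been loaded and treats the dataclass-style MessagePacket constructor as an assumption about the packet API:

from aprsd import packets

pl = packets.PacketList()
pkt = packets.MessagePacket(
    from_call="N0CALL", to_call="APRS", message_text="hello",
)
pl.rx(pkt)                   # counts the packet and stores it; the oldest
                             # entry is evicted once maxlen is reached
print(pl.total_rx())         # -> 1
print(pl.stats()["types"])   # -> {'MessagePacket': {'tx': 0, 'rx': 1}}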


@@ -1,10 +1,9 @@
import datetime
import logging
import threading
from oslo_config import cfg
import wrapt
from aprsd.packets import core
from aprsd.utils import objectstore
@@ -16,28 +15,35 @@ class SeenList(objectstore.ObjectStoreMixin):
"""Global callsign seen list."""
_instance = None
lock = threading.Lock()
data: dict = {}
def __new__(cls, *args, **kwargs):
if cls._instance is None:
cls._instance = super().__new__(cls)
cls._instance._init_store()
cls._instance.data = {}
return cls._instance
@wrapt.synchronized(lock)
def update_seen(self, packet):
callsign = None
if packet.from_call:
callsign = packet.from_call
else:
LOG.warning(f"Can't find FROM in packet {packet}")
return
if callsign not in self.data:
self.data[callsign] = {
"last": None,
"count": 0,
}
self.data[callsign]["last"] = str(datetime.datetime.now())
self.data[callsign]["count"] += 1
def stats(self, serializable=False):
"""Return the stats for the PacketTrack class."""
with self.lock:
return self.data
def rx(self, packet: type[core.Packet]):
"""When we get a packet from the network, update the seen list."""
with self.lock:
callsign = None
if packet.from_call:
callsign = packet.from_call
else:
LOG.warning(f"Can't find FROM in packet {packet}")
return
if callsign not in self.data:
self.data[callsign] = {
"last": None,
"count": 0,
}
self.data[callsign]["last"] = datetime.datetime.now()
self.data[callsign]["count"] += 1
def tx(self, packet: type[core.Packet]):
"""We don't care about TX packets."""


@@ -1,14 +1,14 @@
import datetime
import threading
import logging
from oslo_config import cfg
import wrapt
from aprsd.threads import tx
from aprsd.packets import core
from aprsd.utils import objectstore
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
class PacketTrack(objectstore.ObjectStoreMixin):
@@ -26,7 +26,6 @@ class PacketTrack(objectstore.ObjectStoreMixin):
_instance = None
_start_time = None
lock = threading.Lock()
data: dict = {}
total_tracked: int = 0
@@ -38,74 +37,67 @@ class PacketTrack(objectstore.ObjectStoreMixin):
cls._instance._init_store()
return cls._instance
@wrapt.synchronized(lock)
def __getitem__(self, name):
return self.data[name]
with self.lock:
return self.data[name]
@wrapt.synchronized(lock)
def __iter__(self):
return iter(self.data)
with self.lock:
return iter(self.data)
@wrapt.synchronized(lock)
def keys(self):
return self.data.keys()
with self.lock:
return self.data.keys()
@wrapt.synchronized(lock)
def items(self):
return self.data.items()
with self.lock:
return self.data.items()
@wrapt.synchronized(lock)
def values(self):
return self.data.values()
with self.lock:
return self.data.values()
@wrapt.synchronized(lock)
def __len__(self):
return len(self.data)
def stats(self, serializable=False):
with self.lock:
stats = {
"total_tracked": self.total_tracked,
}
pkts = {}
for key in self.data:
last_send_time = self.data[key].last_send_time
pkts[key] = {
"last_send_time": last_send_time,
"send_count": self.data[key].send_count,
"retry_count": self.data[key].retry_count,
"message": self.data[key].raw,
}
stats["packets"] = pkts
return stats
@wrapt.synchronized(lock)
def add(self, packet):
key = packet.msgNo
packet._last_send_attempt = 0
self.data[key] = packet
self.total_tracked += 1
def rx(self, packet: type[core.Packet]) -> None:
"""When we get a packet from the network, check if we should remove it."""
if isinstance(packet, core.AckPacket):
self._remove(packet.msgNo)
elif isinstance(packet, core.RejectPacket):
self._remove(packet.msgNo)
elif hasattr(packet, "ackMsgNo"):
# Got a piggyback ack, so remove the original message
self._remove(packet.ackMsgNo)
@wrapt.synchronized(lock)
def get(self, key):
return self.data.get(key, None)
def tx(self, packet: type[core.Packet]) -> None:
"""Add a packet that was sent."""
with self.lock:
key = packet.msgNo
packet.send_count = 0
self.data[key] = packet
self.total_tracked += 1
@wrapt.synchronized(lock)
def remove(self, key):
try:
del self.data[key]
except KeyError:
pass
self._remove(key)
def restart(self):
"""Walk the list of messages and restart them if any."""
for key in self.data.keys():
pkt = self.data[key]
if pkt._last_send_attempt < pkt.retry_count:
tx.send(pkt)
def _resend(self, packet):
packet._last_send_attempt = 0
tx.send(packet)
def restart_delayed(self, count=None, most_recent=True):
"""Walk the list of delayed messages and restart them if any."""
if not count:
# Send all the delayed messages
for key in self.data.keys():
pkt = self.data[key]
if pkt._last_send_attempt == pkt._retry_count:
self._resend(pkt)
else:
# They want to resend <count> delayed messages
tmp = sorted(
self.data.items(),
reverse=most_recent,
key=lambda x: x[1].last_send_time,
)
pkt_list = tmp[:count]
for (_key, pkt) in pkt_list:
self._resend(pkt)
def _remove(self, key):
with self.lock:
try:
del self.data[key]
except KeyError:
pass
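
With the collector wiring above, tracking becomes implicit: tx() starts tracking a packet keyed by msgNo, and a later rx() of a matching AckPacket (or a packet carrying ackMsgNo) drops it. A sketch, again treating the packet constructors as assumptions:

from aprsd.packets import core, tracker

track = tracker.PacketTrack()
msg = core.MessagePacket(
    from_call="N0CALL", to_call="KD9XYZ", message_text="ping", msgNo="42",
)
track.tx(msg)      # start tracking, keyed by msgNo
print(len(track))  # -> 1

ack = core.AckPacket(from_call="KD9XYZ", to_call="N0CALL", msgNo="42")
track.rx(ack)      # a matching ack removes the tracked packet
print(len(track))  # -> 0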


@@ -1,11 +1,10 @@
import datetime
import logging
import threading
from oslo_config import cfg
import wrapt
from aprsd import utils
from aprsd.packets import core
from aprsd.utils import objectstore
@@ -17,56 +16,75 @@ class WatchList(objectstore.ObjectStoreMixin):
"""Global watch list and info for callsigns."""
_instance = None
lock = threading.Lock()
data = {}
def __new__(cls, *args, **kwargs):
if cls._instance is None:
cls._instance = super().__new__(cls)
cls._instance._init_store()
cls._instance.data = {}
return cls._instance
def __init__(self, config=None):
ring_size = CONF.watch_list.packet_keep_count
def __init__(self):
super().__init__()
self._update_from_conf()
if CONF.watch_list.callsigns:
for callsign in CONF.watch_list.callsigns:
call = callsign.replace("*", "")
# FIXME(waboring) - we should fetch the last time we saw
# a beacon from a callsign or some other mechanism to find
# last time a message was seen by aprs-is. For now this
# is all we can do.
self.data[call] = {
"last": datetime.datetime.now(),
"packets": utils.RingBuffer(
ring_size,
),
def _update_from_conf(self, config=None):
with self.lock:
if CONF.watch_list.enabled and CONF.watch_list.callsigns:
for callsign in CONF.watch_list.callsigns:
call = callsign.replace("*", "")
# FIXME(waboring) - we should fetch the last time we saw
# a beacon from a callsign or some other mechanism to find
# last time a message was seen by aprs-is. For now this
# is all we can do.
if call not in self.data:
self.data[call] = {
"last": None,
"packet": None,
}
def stats(self, serializable=False) -> dict:
stats = {}
with self.lock:
for callsign in self.data:
stats[callsign] = {
"last": self.data[callsign]["last"],
"packet": self.data[callsign]["packet"],
"age": self.age(callsign),
"old": self.is_old(callsign),
}
return stats
def is_enabled(self):
return CONF.watch_list.enabled
def callsign_in_watchlist(self, callsign):
return callsign in self.data
with self.lock:
return callsign in self.data
def rx(self, packet: type[core.Packet]) -> None:
"""Track when we got a packet from the network."""
callsign = packet.from_call
@wrapt.synchronized(lock)
def update_seen(self, packet):
if packet.addresse:
callsign = packet.addresse
else:
callsign = packet.from_call
if self.callsign_in_watchlist(callsign):
self.data[callsign]["last"] = datetime.datetime.now()
self.data[callsign]["packets"].append(packet)
with self.lock:
self.data[callsign]["last"] = datetime.datetime.now()
self.data[callsign]["packet"] = packet
def tx(self, packet: type[core.Packet]) -> None:
"""We don't care about TX packets."""
def last_seen(self, callsign):
if self.callsign_in_watchlist(callsign):
return self.data[callsign]["last"]
with self.lock:
if self.callsign_in_watchlist(callsign):
return self.data[callsign]["last"]
def age(self, callsign):
now = datetime.datetime.now()
return str(now - self.last_seen(callsign))
last_seen_time = self.last_seen(callsign)
if last_seen_time:
return str(now - last_seen_time)
else:
return None
def max_delta(self, seconds=None):
if not seconds:
@@ -83,14 +101,19 @@ class WatchList(objectstore.ObjectStoreMixin):
We put this here so any notification plugin can use this
same test.
"""
if not self.callsign_in_watchlist(callsign):
return False
age = self.age(callsign)
if age:
delta = utils.parse_delta_str(age)
d = datetime.timedelta(**delta)
delta = utils.parse_delta_str(age)
d = datetime.timedelta(**delta)
max_delta = self.max_delta(seconds=seconds)
max_delta = self.max_delta(seconds=seconds)
if d > max_delta:
return True
if d > max_delta:
return True
else:
return False
else:
return False
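
The is_old() check round-trips a timedelta through its string form: age() returns str(now - last_seen), and utils.parse_delta_str() turns that back into timedelta keyword arguments. An illustrative stand-in for that parse (not aprsd's code):

import datetime
import re


def parse_delta_str(s):
    """Parse str(timedelta) output like '1 day, 2:03:04.500' into kwargs."""
    if "day" in s:
        m = re.match(
            r"(?P<days>[-\d]+) day[s]*, (?P<hours>\d+):(?P<minutes>\d+):(?P<seconds>[\d.]+)", s,
        )
    else:
        m = re.match(r"(?P<hours>\d+):(?P<minutes>\d+):(?P<seconds>[\d.]+)", s)
    return {key: float(val) for key, val in m.groupdict().items()}


last_seen = datetime.datetime.now() - datetime.timedelta(hours=2)
age = str(datetime.datetime.now() - last_seen)
d = datetime.timedelta(**parse_delta_str(age))
print(d > datetime.timedelta(seconds=3600))  # -> True: older than an hour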


@@ -1,4 +1,5 @@
# The base plugin class
from __future__ import annotations
import abc
import importlib
import inspect
@@ -24,7 +25,6 @@ CORE_MESSAGE_PLUGINS = [
"aprsd.plugins.fortune.FortunePlugin",
"aprsd.plugins.location.LocationPlugin",
"aprsd.plugins.ping.PingPlugin",
"aprsd.plugins.query.QueryPlugin",
"aprsd.plugins.time.TimePlugin",
"aprsd.plugins.weather.USWeatherPlugin",
"aprsd.plugins.version.VersionPlugin",
@@ -42,7 +42,7 @@ class APRSDPluginSpec:
"""A hook specification namespace."""
@hookspec
def filter(self, packet: packets.core.Packet):
def filter(self, packet: type[packets.Packet]):
"""My special little hook that you can customize."""
@@ -65,7 +65,7 @@ class APRSDPluginBase(metaclass=abc.ABCMeta):
self.threads = self.create_threads() or []
self.start_threads()
def start_threads(self):
def start_threads(self) -> None:
if self.enabled and self.threads:
if not isinstance(self.threads, list):
self.threads = [self.threads]
@@ -90,10 +90,10 @@ class APRSDPluginBase(metaclass=abc.ABCMeta):
)
@property
def message_count(self):
def message_count(self) -> int:
return self.message_counter
def help(self):
def help(self) -> str:
return "Help!"
@abc.abstractmethod
@@ -118,11 +118,11 @@ class APRSDPluginBase(metaclass=abc.ABCMeta):
thread.stop()
@abc.abstractmethod
def filter(self, packet: packets.core.Packet):
def filter(self, packet: type[packets.Packet]) -> str | packets.MessagePacket:
pass
@abc.abstractmethod
def process(self, packet: packets.core.Packet):
def process(self, packet: type[packets.Packet]):
"""This is called when the filter passes."""
@@ -147,14 +147,14 @@ class APRSDWatchListPluginBase(APRSDPluginBase, metaclass=abc.ABCMeta):
watch_list = CONF.watch_list.callsigns
# make sure the timeout is set or this doesn't work
if watch_list:
aprs_client = client.factory.create().client
aprs_client = client.client_factory.create().client
filter_str = "b/{}".format("/".join(watch_list))
aprs_client.set_filter(filter_str)
else:
LOG.warning("Watch list enabled, but no callsigns set.")
@hookimpl
def filter(self, packet: packets.core.Packet):
def filter(self, packet: type[packets.Packet]) -> str | packets.MessagePacket:
result = packets.NULL_MESSAGE
if self.enabled:
wl = watch_list.WatchList()
@@ -206,14 +206,14 @@ class APRSDRegexCommandPluginBase(APRSDPluginBase, metaclass=abc.ABCMeta):
self.enabled = True
@hookimpl
def filter(self, packet: packets.core.MessagePacket):
LOG.info(f"{self.__class__.__name__} called")
def filter(self, packet: packets.MessagePacket) -> str | packets.MessagePacket:
LOG.debug(f"{self.__class__.__name__} called")
if not self.enabled:
result = f"{self.__class__.__name__} isn't enabled"
LOG.warning(result)
return result
if not isinstance(packet, packets.core.MessagePacket):
if not isinstance(packet, packets.MessagePacket):
LOG.warning(f"{self.__class__.__name__} Got a {packet.__class__.__name__} ignoring")
return packets.NULL_MESSAGE
@@ -226,7 +226,7 @@ class APRSDRegexCommandPluginBase(APRSDPluginBase, metaclass=abc.ABCMeta):
# and is an APRS message format and has a message.
if (
tocall == CONF.callsign
and isinstance(packet, packets.core.MessagePacket)
and isinstance(packet, packets.MessagePacket)
and message
):
if re.search(self.command_regex, message, re.IGNORECASE):
@@ -269,7 +269,7 @@ class HelpPlugin(APRSDRegexCommandPluginBase):
def help(self):
return "Help: send APRS help or help <plugin>"
def process(self, packet: packets.core.MessagePacket):
def process(self, packet: packets.MessagePacket):
LOG.info("HelpPlugin")
# fromcall = packet.get("from")
message = packet.message_text
@@ -343,6 +343,28 @@ class PluginManager:
self._watchlist_pm = pluggy.PluginManager("aprsd")
self._watchlist_pm.add_hookspecs(APRSDPluginSpec)
def stats(self, serializable=False) -> dict:
"""Collect and return stats for all plugins."""
def full_name_with_qualname(obj):
return "{}.{}".format(
obj.__class__.__module__,
obj.__class__.__qualname__,
)
plugin_stats = {}
plugins = self.get_plugins()
if plugins:
for p in plugins:
plugin_stats[full_name_with_qualname(p)] = {
"enabled": p.enabled,
"rx": p.rx_count,
"tx": p.tx_count,
"version": p.version,
}
return plugin_stats
def is_plugin(self, obj):
for c in inspect.getmro(obj):
if issubclass(c, APRSDPluginBase):
@@ -368,7 +390,9 @@ class PluginManager:
try:
module_name, class_name = module_class_string.rsplit(".", 1)
module = importlib.import_module(module_name)
module = importlib.reload(module)
# Commented out because the email thread starts in a different context
# and hence gives a different singleton for the EmailStats
# module = importlib.reload(module)
except Exception as ex:
if not module_name:
LOG.error(f"Failed to load Plugin {module_class_string}")
@@ -448,7 +472,10 @@ class PluginManager:
del self._pluggy_pm
self.setup_plugins()
def setup_plugins(self, load_help_plugin=True):
def setup_plugins(
self, load_help_plugin=True,
plugin_list=None,
):
"""Create the plugin manager and register plugins."""
LOG.info("Loading APRSD Plugins")
@@ -457,9 +484,13 @@ class PluginManager:
_help = HelpPlugin()
self._pluggy_pm.register(_help)
enabled_plugins = CONF.enabled_plugins
if enabled_plugins:
for p_name in enabled_plugins:
# if plugin_list is passed in, only load
# those plugins.
if plugin_list:
for plugin_name in plugin_list:
self._load_plugin(plugin_name)
elif CONF.enabled_plugins:
for p_name in CONF.enabled_plugins:
self._load_plugin(p_name)
else:
# Enabled plugins isn't set, so we default to loading all of
@@ -469,12 +500,12 @@ class PluginManager:
LOG.info("Completed Plugin Loading.")
def run(self, packet: packets.core.MessagePacket):
def run(self, packet: packets.MessagePacket):
"""Execute all the plugins run method."""
with self.lock:
return self._pluggy_pm.hook.filter(packet=packet)
def run_watchlist(self, packet: packets.core.Packet):
def run_watchlist(self, packet: packets.Packet):
with self.lock:
return self._watchlist_pm.hook.filter(packet=packet)
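
Putting the base classes together, a minimal custom plugin looks like the sketch below: the inherited filter() matches command_regex against MessagePackets addressed to our callsign, and process() returns the reply text. The module path and class are hypothetical; it would be loaded via enabled_plugins in the config, or with --enable-plugin mymodule.EchoPlugin on the listen command.

import logging

from aprsd import plugin

LOG = logging.getLogger("APRSD")


class EchoPlugin(plugin.APRSDRegexCommandPluginBase):
    """Reply to 'echo ...' messages with the original text."""

    command_regex = r"^echo"
    command_name = "echo"
    short_description = "Echo back whatever was sent"

    def process(self, packet):
        LOG.info("EchoPlugin called")
        return packet.message_text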


@@ -11,7 +11,8 @@ import time
import imapclient
from oslo_config import cfg
from aprsd import packets, plugin, stats, threads
from aprsd import packets, plugin, threads, utils
from aprsd.stats import collector
from aprsd.threads import tx
from aprsd.utils import trace
@@ -60,6 +61,38 @@ class EmailInfo:
self._delay = val
@utils.singleton
class EmailStats:
"""Singleton object to store stats related to email."""
_instance = None
tx = 0
rx = 0
email_thread_last_time = None
def stats(self, serializable=False):
if CONF.email_plugin.enabled:
last_check_time = self.email_thread_last_time
if serializable and last_check_time:
last_check_time = last_check_time.isoformat()
stats = {
"tx": self.tx,
"rx": self.rx,
"last_check_time": last_check_time,
}
else:
stats = {}
return stats
def tx_inc(self):
self.tx += 1
def rx_inc(self):
self.rx += 1
def email_thread_update(self):
self.email_thread_last_time = datetime.datetime.now()
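
A sketch of what a class decorator like @utils.singleton typically looks like; aprsd's actual implementation may differ in detail:

def singleton(cls):
    """Return a factory that always hands back the same instance of cls."""
    instances = {}

    def wrapper(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]

    return wrapper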
class EmailPlugin(plugin.APRSDRegexCommandPluginBase):
"""Email Plugin."""
@@ -94,6 +127,11 @@ class EmailPlugin(plugin.APRSDRegexCommandPluginBase):
shortcuts = _build_shortcuts_dict()
LOG.info(f"Email shortcuts {shortcuts}")
# Register the EmailStats producer with the stats collector
# We do this here to prevent EmailStats from being registered
# when email is not enabled in the config file.
collector.Collector().register_producer(EmailStats)
else:
LOG.info("Email services not enabled.")
self.enabled = False
@@ -190,10 +228,6 @@ class EmailPlugin(plugin.APRSDRegexCommandPluginBase):
def _imap_connect():
imap_port = CONF.email_plugin.imap_port
use_ssl = CONF.email_plugin.imap_use_ssl
# host = CONFIG["aprsd"]["email"]["imap"]["host"]
# msg = "{}{}:{}".format("TLS " if use_ssl else "", host, imap_port)
# LOG.debug("Connect to IMAP host {} with user '{}'".
# format(msg, CONFIG['imap']['login']))
try:
server = imapclient.IMAPClient(
@@ -440,7 +474,7 @@ def send_email(to_addr, content):
[to_addr],
msg.as_string(),
)
stats.APRSDStats().email_tx_inc()
EmailStats().tx_inc()
except Exception:
LOG.exception("Sendmail Error!!!!")
server.quit()
@@ -545,7 +579,7 @@ class APRSDEmailThread(threads.APRSDThread):
def loop(self):
time.sleep(5)
stats.APRSDStats().email_thread_update()
EmailStats().email_thread_update()
# always sleep for 5 seconds and see if we need to check email
# This allows CTRL-C to stop the execution of this loop sooner
# than check_email_delay time


@@ -8,7 +8,7 @@ from aprsd.utils import trace
LOG = logging.getLogger("APRSD")
DEFAULT_FORTUNE_PATH = '/usr/games/fortune'
DEFAULT_FORTUNE_PATH = "/usr/games/fortune"
class FortunePlugin(plugin.APRSDRegexCommandPluginBase):
@@ -45,7 +45,7 @@ class FortunePlugin(plugin.APRSDRegexCommandPluginBase):
command,
shell=True,
timeout=3,
universal_newlines=True,
text=True,
)
output = (
output.replace("\r", "")


@@ -2,8 +2,10 @@ import logging
import re
import time
from geopy.geocoders import ArcGIS, AzureMaps, Baidu, Bing, GoogleV3
from geopy.geocoders import HereV7, Nominatim, OpenCage, TomTom, What3WordsV3, Woosmap
from geopy.geocoders import (
ArcGIS, AzureMaps, Baidu, Bing, GoogleV3, HereV7, Nominatim, OpenCage,
TomTom, What3WordsV3, Woosmap,
)
from oslo_config import cfg
from aprsd import packets, plugin, plugin_utils
@@ -39,8 +41,8 @@ class USGov:
result = plugin_utils.get_weather_gov_for_gps(lat, lon)
# LOG.info(f"WEATHER: {result}")
# LOG.info(f"area description {result['location']['areaDescription']}")
if 'location' in result:
loc = UsLocation(result['location']['areaDescription'])
if "location" in result:
loc = UsLocation(result["location"]["areaDescription"])
else:
loc = UsLocation("Unknown Location")


@@ -1,81 +0,0 @@
import datetime
import logging
import re
from oslo_config import cfg
from aprsd import packets, plugin
from aprsd.packets import tracker
from aprsd.utils import trace
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
class QueryPlugin(plugin.APRSDRegexCommandPluginBase):
"""Query command."""
command_regex = r"^\!.*"
command_name = "query"
short_description = "APRSD Owner command to query messages in the MsgTrack"
def setup(self):
"""Do any plugin setup here."""
if not CONF.query_plugin.callsign:
LOG.error("Config query_plugin.callsign not set. Disabling plugin")
self.enabled = False
self.enabled = True
@trace.trace
def process(self, packet: packets.MessagePacket):
LOG.info("Query COMMAND")
fromcall = packet.from_call
message = packet.get("message_text", None)
pkt_tracker = tracker.PacketTrack()
now = datetime.datetime.now()
reply = "Pending messages ({}) {}".format(
len(pkt_tracker),
now.strftime("%H:%M:%S"),
)
searchstring = "^" + CONF.query_plugin.callsign + ".*"
# only I can do admin commands
if re.search(searchstring, fromcall):
# resend last N most recent: "!3"
r = re.search(r"^\!([0-9]).*", message)
if r is not None:
if len(pkt_tracker) > 0:
last_n = r.group(1)
reply = packets.NULL_MESSAGE
LOG.debug(reply)
pkt_tracker.restart_delayed(count=int(last_n))
else:
reply = "No pending msgs to resend"
LOG.debug(reply)
return reply
# resend all: "!a"
r = re.search(r"^\![aA].*", message)
if r is not None:
if len(pkt_tracker) > 0:
reply = packets.NULL_MESSAGE
LOG.debug(reply)
pkt_tracker.restart_delayed()
else:
reply = "No pending msgs"
LOG.debug(reply)
return reply
# delete all: "!d"
r = re.search(r"^\![dD].*", message)
if r is not None:
reply = "Deleted ALL pending msgs."
LOG.debug(reply)
pkt_tracker.flush()
return reply
return reply


@@ -1,9 +1,9 @@
import logging
import re
import time
from oslo_config import cfg
import pytz
from tzlocal import get_localzone
from aprsd import packets, plugin, plugin_utils
from aprsd.utils import fuzzy, trace
@@ -22,7 +22,8 @@ class TimePlugin(plugin.APRSDRegexCommandPluginBase):
short_description = "What is the current local time."
def _get_local_tz(self):
return pytz.timezone(time.strftime("%Z"))
lz = get_localzone()
return pytz.timezone(str(lz))
def _get_utcnow(self):
return pytz.datetime.datetime.utcnow()


@@ -1,7 +1,8 @@
import logging
import aprsd
from aprsd import plugin, stats
from aprsd import plugin
from aprsd.stats import collector
LOG = logging.getLogger("APRSD")
@@ -23,10 +24,8 @@ class VersionPlugin(plugin.APRSDRegexCommandPluginBase):
# fromcall = packet.get("from")
# message = packet.get("message_text", None)
# ack = packet.get("msgNo", "0")
stats_obj = stats.APRSDStats()
s = stats_obj.stats()
print(s)
s = collector.Collector().collect()
return "APRSD ver:{} uptime:{}".format(
aprsd.__version__,
s["aprsd"]["uptime"],
s["APRSDStats"]["uptime"],
)


@@ -110,7 +110,6 @@ class USMetarPlugin(plugin.APRSDRegexCommandPluginBase, plugin.APRSFIKEYMixin):
@trace.trace
def process(self, packet):
print("FISTY")
fromcall = packet.get("from")
message = packet.get("message_text", None)
# ack = packet.get("msgNo", "0")


@@ -1,14 +0,0 @@
import rpyc
class AuthSocketStream(rpyc.SocketStream):
"""Used to authenitcate the RPC stream to remote."""
@classmethod
def connect(cls, *args, authorizer=None, **kwargs):
stream_obj = super().connect(*args, **kwargs)
if callable(authorizer):
authorizer(stream_obj.sock)
return stream_obj


@@ -1,165 +0,0 @@
import json
import logging
from oslo_config import cfg
import rpyc
from aprsd import conf # noqa
from aprsd import rpc
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
class RPCClient:
_instance = None
_rpc_client = None
ip = None
port = None
magic_word = None
def __new__(cls, *args, **kwargs):
if cls._instance is None:
cls._instance = super().__new__(cls)
return cls._instance
def __init__(self, ip=None, port=None, magic_word=None):
if ip:
self.ip = ip
else:
self.ip = CONF.rpc_settings.ip
if port:
self.port = int(port)
else:
self.port = CONF.rpc_settings.port
if magic_word:
self.magic_word = magic_word
else:
self.magic_word = CONF.rpc_settings.magic_word
self._check_settings()
self.get_rpc_client()
def _check_settings(self):
if not CONF.rpc_settings.enabled:
LOG.warning("RPC is not enabled, no way to get stats!!")
if self.magic_word == conf.common.APRSD_DEFAULT_MAGIC_WORD:
LOG.warning("You are using the default RPC magic word!!!")
LOG.warning("edit aprsd.conf and change rpc_settings.magic_word")
LOG.debug(f"RPC Client: {self.ip}:{self.port} {self.magic_word}")
def _rpyc_connect(
self, host, port, service=rpyc.VoidService,
config={}, ipv6=False,
keepalive=False, authorizer=None, ):
LOG.info(f"Connecting to RPC host '{host}:{port}'")
try:
s = rpc.AuthSocketStream.connect(
host, port, ipv6=ipv6, keepalive=keepalive,
authorizer=authorizer,
)
return rpyc.utils.factory.connect_stream(s, service, config=config)
except ConnectionRefusedError:
LOG.error(f"Failed to connect to RPC host '{host}:{port}'")
return None
def get_rpc_client(self):
if not self._rpc_client:
self._rpc_client = self._rpyc_connect(
self.ip,
self.port,
authorizer=lambda sock: sock.send(self.magic_word.encode()),
)
return self._rpc_client
def get_stats_dict(self):
cl = self.get_rpc_client()
result = {}
if not cl:
return result
try:
rpc_stats_dict = cl.root.get_stats()
result = json.loads(rpc_stats_dict)
except EOFError:
LOG.error("Lost connection to RPC Host")
self._rpc_client = None
return result
def get_stats(self):
cl = self.get_rpc_client()
result = {}
if not cl:
return result
try:
result = cl.root.get_stats_obj()
except EOFError:
LOG.error("Lost connection to RPC Host")
self._rpc_client = None
return result
def get_packet_track(self):
cl = self.get_rpc_client()
result = None
if not cl:
return result
try:
result = cl.root.get_packet_track()
except EOFError:
LOG.error("Lost connection to RPC Host")
self._rpc_client = None
return result
def get_packet_list(self):
cl = self.get_rpc_client()
result = None
if not cl:
return result
try:
result = cl.root.get_packet_list()
except EOFError:
LOG.error("Lost connection to RPC Host")
self._rpc_client = None
return result
def get_watch_list(self):
cl = self.get_rpc_client()
result = None
if not cl:
return result
try:
result = cl.root.get_watch_list()
except EOFError:
LOG.error("Lost connection to RPC Host")
self._rpc_client = None
return result
def get_seen_list(self):
cl = self.get_rpc_client()
result = None
if not cl:
return result
try:
result = cl.root.get_seen_list()
except EOFError:
LOG.error("Lost connection to RPC Host")
self._rpc_client = None
return result
def get_log_entries(self):
cl = self.get_rpc_client()
result = None
if not cl:
return result
try:
result_str = cl.root.get_log_entries()
result = json.loads(result_str)
except EOFError:
LOG.error("Lost connection to RPC Host")
self._rpc_client = None
return result


@ -1,99 +0,0 @@
import json
import logging
from oslo_config import cfg
import rpyc
from rpyc.utils.authenticators import AuthenticationError
from rpyc.utils.server import ThreadPoolServer
from aprsd import conf # noqa: F401
from aprsd import packets, stats, threads
from aprsd.threads import log_monitor
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
def magic_word_authenticator(sock):
client_ip = sock.getpeername()[0]
magic = sock.recv(len(CONF.rpc_settings.magic_word)).decode()
if magic != CONF.rpc_settings.magic_word:
LOG.error(
f"wrong magic word passed from {client_ip} "
"'{magic}' != '{CONF.rpc_settings.magic_word}'",
)
raise AuthenticationError(
f"wrong magic word passed in '{magic}'"
f" != '{CONF.rpc_settings.magic_word}'",
)
return sock, None
class APRSDRPCThread(threads.APRSDThread):
def __init__(self):
super().__init__(name="RPCThread")
self.thread = ThreadPoolServer(
APRSDService,
port=CONF.rpc_settings.port,
protocol_config={"allow_public_attrs": True},
authenticator=magic_word_authenticator,
)
def stop(self):
if self.thread:
self.thread.close()
self.thread_stop = True
def loop(self):
# there is no loop as run is blocked
if self.thread and not self.thread_stop:
# This is a blocking call
self.thread.start()
@rpyc.service
class APRSDService(rpyc.Service):
def on_connect(self, conn):
# code that runs when a connection is created
# (to init the service, if needed)
LOG.info("RPC Client Connected")
self._conn = conn
def on_disconnect(self, conn):
# code that runs after the connection has already closed
# (to finalize the service, if needed)
LOG.info("RPC Client Disconnected")
self._conn = None
@rpyc.exposed
def get_stats(self):
stat = stats.APRSDStats()
stats_dict = stat.stats()
return_str = json.dumps(stats_dict, indent=4, sort_keys=True, default=str)
return return_str
@rpyc.exposed
def get_stats_obj(self):
return stats.APRSDStats()
@rpyc.exposed
def get_packet_list(self):
return packets.PacketList()
@rpyc.exposed
def get_packet_track(self):
return packets.PacketTrack()
@rpyc.exposed
def get_watch_list(self):
return packets.WatchList()
@rpyc.exposed
def get_seen_list(self):
return packets.SeenList()
@rpyc.exposed
def get_log_entries(self):
entries = log_monitor.LogEntries().get_all_and_purge()
return json.dumps(entries, default=str)


@ -1,266 +0,0 @@
import datetime
import logging
import threading
from oslo_config import cfg
import wrapt
import aprsd
from aprsd import packets, plugin, utils
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
class APRSDStats:
_instance = None
lock = threading.Lock()
start_time = None
_aprsis_server = None
_aprsis_keepalive = None
_email_thread_last_time = None
_email_tx = 0
_email_rx = 0
_mem_current = 0
_mem_peak = 0
_thread_info = {}
_pkt_cnt = {
"Packet": {
"tx": 0,
"rx": 0,
},
"AckPacket": {
"tx": 0,
"rx": 0,
},
"GPSPacket": {
"tx": 0,
"rx": 0,
},
"StatusPacket": {
"tx": 0,
"rx": 0,
},
"MicEPacket": {
"tx": 0,
"rx": 0,
},
"MessagePacket": {
"tx": 0,
"rx": 0,
},
"WeatherPacket": {
"tx": 0,
"rx": 0,
},
"ObjectPacket": {
"tx": 0,
"rx": 0,
},
}
def __new__(cls, *args, **kwargs):
if cls._instance is None:
cls._instance = super().__new__(cls)
# any init here
cls._instance.start_time = datetime.datetime.now()
cls._instance._aprsis_keepalive = datetime.datetime.now()
return cls._instance
@wrapt.synchronized(lock)
@property
def uptime(self):
return datetime.datetime.now() - self.start_time
@wrapt.synchronized(lock)
@property
def memory(self):
return self._mem_current
@wrapt.synchronized(lock)
def set_memory(self, memory):
self._mem_current = memory
@wrapt.synchronized(lock)
@property
def memory_peak(self):
return self._mem_peak
@wrapt.synchronized(lock)
def set_memory_peak(self, memory):
self._mem_peak = memory
@wrapt.synchronized(lock)
def set_thread_info(self, thread_info):
self._thread_info = thread_info
@wrapt.synchronized(lock)
@property
def thread_info(self):
return self._thread_info
@wrapt.synchronized(lock)
@property
def aprsis_server(self):
return self._aprsis_server
@wrapt.synchronized(lock)
def set_aprsis_server(self, server):
self._aprsis_server = server
@wrapt.synchronized(lock)
@property
def aprsis_keepalive(self):
return self._aprsis_keepalive
@wrapt.synchronized(lock)
def set_aprsis_keepalive(self):
self._aprsis_keepalive = datetime.datetime.now()
def rx(self, packet):
pkt_type = packet.__class__.__name__
if pkt_type not in self._pkt_cnt:
self._pkt_cnt[pkt_type] = {
"tx": 0,
"rx": 0,
}
self._pkt_cnt[pkt_type]["rx"] += 1
def tx(self, packet):
pkt_type = packet.__class__.__name__
if pkt_type not in self._pkt_cnt:
self._pkt_cnt[pkt_type] = {
"tx": 0,
"rx": 0,
}
self._pkt_cnt[pkt_type]["tx"] += 1
@wrapt.synchronized(lock)
@property
def msgs_tracked(self):
return packets.PacketTrack().total_tracked
@wrapt.synchronized(lock)
@property
def email_tx(self):
return self._email_tx
@wrapt.synchronized(lock)
def email_tx_inc(self):
self._email_tx += 1
@wrapt.synchronized(lock)
@property
def email_rx(self):
return self._email_rx
@wrapt.synchronized(lock)
def email_rx_inc(self):
self._email_rx += 1
@wrapt.synchronized(lock)
@property
def email_thread_time(self):
return self._email_thread_last_time
@wrapt.synchronized(lock)
def email_thread_update(self):
self._email_thread_last_time = datetime.datetime.now()
@wrapt.synchronized(lock)
def stats(self):
now = datetime.datetime.now()
if self._email_thread_last_time:
last_update = str(now - self._email_thread_last_time)
else:
last_update = "never"
if self._aprsis_keepalive:
last_aprsis_keepalive = str(now - self._aprsis_keepalive)
else:
last_aprsis_keepalive = "never"
pm = plugin.PluginManager()
plugins = pm.get_plugins()
plugin_stats = {}
if plugins:
def full_name_with_qualname(obj):
return "{}.{}".format(
obj.__class__.__module__,
obj.__class__.__qualname__,
)
for p in plugins:
plugin_stats[full_name_with_qualname(p)] = {
"enabled": p.enabled,
"rx": p.rx_count,
"tx": p.tx_count,
"version": p.version,
}
wl = packets.WatchList()
sl = packets.SeenList()
pl = packets.PacketList()
stats = {
"aprsd": {
"version": aprsd.__version__,
"uptime": utils.strfdelta(self.uptime),
"callsign": CONF.callsign,
"memory_current": int(self.memory),
"memory_current_str": utils.human_size(self.memory),
"memory_peak": int(self.memory_peak),
"memory_peak_str": utils.human_size(self.memory_peak),
"threads": self._thread_info,
"watch_list": wl.get_all(),
"seen_list": sl.get_all(),
},
"aprs-is": {
"server": str(self.aprsis_server),
"callsign": CONF.aprs_network.login,
"last_update": last_aprsis_keepalive,
},
"packets": {
"total_tracked": int(pl.total_tx() + pl.total_rx()),
"total_sent": int(pl.total_tx()),
"total_received": int(pl.total_rx()),
"by_type": self._pkt_cnt,
},
"messages": {
"sent": self._pkt_cnt["MessagePacket"]["tx"],
"received": self._pkt_cnt["MessagePacket"]["tx"],
"ack_sent": self._pkt_cnt["AckPacket"]["tx"],
},
"email": {
"enabled": CONF.email_plugin.enabled,
"sent": int(self._email_tx),
"received": int(self._email_rx),
"thread_last_update": last_update,
},
"plugins": plugin_stats,
}
return stats
def __str__(self):
pl = packets.PacketList()
return (
"Uptime:{} Msgs TX:{} RX:{} "
"ACK: TX:{} RX:{} "
"Email TX:{} RX:{} LastLoop:{} ".format(
self.uptime,
pl.total_tx(),
pl.total_rx(),
self._pkt_cnt["AckPacket"]["tx"],
self._pkt_cnt["AckPacket"]["rx"],
self._email_tx,
self._email_rx,
self._email_thread_last_time,
)
)

aprsd/stats/__init__.py (new file, 18 lines)

@ -0,0 +1,18 @@
from aprsd import plugin
from aprsd.client import stats as client_stats
from aprsd.packets import packet_list, seen_list, tracker, watch_list
from aprsd.stats import app, collector
from aprsd.threads import aprsd
# Create the collector and register all the objects
# that APRSD has that implement the stats protocol
stats_collector = collector.Collector()
stats_collector.register_producer(app.APRSDStats)
stats_collector.register_producer(packet_list.PacketList)
stats_collector.register_producer(watch_list.WatchList)
stats_collector.register_producer(tracker.PacketTrack)
stats_collector.register_producer(plugin.PluginManager)
stats_collector.register_producer(aprsd.APRSDThreadList)
stats_collector.register_producer(client_stats.APRSClientStats)
stats_collector.register_producer(seen_list.SeenList)
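
Since all producers are registered at import time, a consumer only needs the Collector itself. A minimal usage sketch (the "APRSDStats" and "uptime" keys come from the producers registered above):

from aprsd.stats import collector

stats = collector.Collector().collect(serializable=True)
# Each top-level key is a producer class name, e.g. "APRSDStats", "PacketList", ...
print(stats["APRSDStats"]["uptime"])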

aprsd/stats/app.py (new file, 49 lines)

@ -0,0 +1,49 @@
import datetime
import tracemalloc
from oslo_config import cfg
import aprsd
from aprsd import utils
from aprsd.log import log as aprsd_log
CONF = cfg.CONF
class APRSDStats:
"""The AppStats class is used to collect stats from the application."""
_instance = None
start_time = None
def __new__(cls, *args, **kwargs):
"""Have to override the new method to make this a singleton
instead of using the @singleton decorator so the unit tests work.
"""
if not cls._instance:
cls._instance = super().__new__(cls)
cls._instance.start_time = datetime.datetime.now()
return cls._instance
def uptime(self):
return datetime.datetime.now() - self.start_time
def stats(self, serializable=False) -> dict:
current, peak = tracemalloc.get_traced_memory()
uptime = self.uptime()
qsize = aprsd_log.logging_queue.qsize()
if serializable:
uptime = str(uptime)
stats = {
"version": aprsd.__version__,
"uptime": uptime,
"callsign": CONF.callsign,
"memory_current": int(current),
"memory_current_str": utils.human_size(current),
"memory_peak": int(peak),
"memory_peak_str": utils.human_size(peak),
"loging_queue": qsize,
}
return stats

aprsd/stats/collector.py (new file, 42 lines)

@ -0,0 +1,42 @@
import logging
from typing import Callable, Protocol, runtime_checkable
from aprsd.utils import singleton
LOG = logging.getLogger("APRSD")
@runtime_checkable
class StatsProducer(Protocol):
"""The StatsProducer protocol is used to define the interface for collecting stats."""
def stats(self, serializable=False) -> dict:
"""provide stats in a dictionary format."""
...
@singleton
class Collector:
"""The Collector class is used to collect stats from multiple StatsProducer instances."""
def __init__(self):
self.producers: list[Callable] = []
def collect(self, serializable=False) -> dict:
stats = {}
for name in self.producers:
cls = name()
try:
stats[cls.__class__.__name__] = cls.stats(serializable=serializable).copy()
except Exception as e:
LOG.error(f"Error in producer {name} (stats): {e}")
return stats
def register_producer(self, producer_name: Callable):
if not isinstance(producer_name, StatsProducer):
raise TypeError(f"Producer {producer_name} is not a StatsProducer")
self.producers.append(producer_name)
def unregister_producer(self, producer_name: Callable):
if not isinstance(producer_name, StatsProducer):
raise TypeError(f"Producer {producer_name} is not a StatsProducer")
self.producers.remove(producer_name)
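
For illustration, a sketch of wiring a custom producer into this collector; the MyStats class is hypothetical and not part of APRSD:

from aprsd.stats import collector

class MyStats:
    """Anything with a stats() method satisfies the StatsProducer protocol."""
    def stats(self, serializable=False) -> dict:
        return {"custom_counter": 42}

c = collector.Collector()
c.register_producer(MyStats)  # the runtime_checkable isinstance() gate accepts the class
print(c.collect())            # {'MyStats': {'custom_counter': 42}}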


@ -3,8 +3,9 @@ import queue
# Make these available to anyone importing
# aprsd.threads
from .aprsd import APRSDThread, APRSDThreadList # noqa: F401
from .keep_alive import KeepAliveThread # noqa: F401
from .rx import APRSDRXThread, APRSDDupeRXThread, APRSDProcessPacketThread # noqa: F401
from .rx import ( # noqa: F401
APRSDDupeRXThread, APRSDProcessPacketThread, APRSDRXThread,
)
packet_queue = queue.Queue(maxsize=20)


@ -2,6 +2,7 @@ import abc
import datetime
import logging
import threading
from typing import List
import wrapt
@ -9,43 +10,10 @@ import wrapt
LOG = logging.getLogger("APRSD")
class APRSDThreadList:
"""Singleton class that keeps track of application wide threads."""
_instance = None
threads_list = []
lock = threading.Lock()
def __new__(cls, *args, **kwargs):
if cls._instance is None:
cls._instance = super().__new__(cls)
cls.threads_list = []
return cls._instance
@wrapt.synchronized(lock)
def add(self, thread_obj):
self.threads_list.append(thread_obj)
@wrapt.synchronized(lock)
def remove(self, thread_obj):
self.threads_list.remove(thread_obj)
@wrapt.synchronized(lock)
def stop_all(self):
"""Iterate over all threads and call stop on them."""
for th in self.threads_list:
LOG.info(f"Stopping Thread {th.name}")
if hasattr(th, "packet"):
LOG.info(F"{th.name} packet {th.packet}")
th.stop()
@wrapt.synchronized(lock)
def __len__(self):
return len(self.threads_list)
class APRSDThread(threading.Thread, metaclass=abc.ABCMeta):
"""Base class for all threads in APRSD."""
loop_count = 1
def __init__(self, name):
super().__init__(name=name)
@ -79,6 +47,7 @@ class APRSDThread(threading.Thread, metaclass=abc.ABCMeta):
def run(self):
LOG.debug("Starting")
while not self._should_quit():
self.loop_count += 1
can_loop = self.loop()
self._last_loop = datetime.datetime.now()
if not can_loop:
@ -86,3 +55,65 @@ class APRSDThread(threading.Thread, metaclass=abc.ABCMeta):
self._cleanup()
APRSDThreadList().remove(self)
LOG.debug("Exiting")
class APRSDThreadList:
"""Singleton class that keeps track of application wide threads."""
_instance = None
threads_list: List[APRSDThread] = []
lock = threading.Lock()
def __new__(cls, *args, **kwargs):
if cls._instance is None:
cls._instance = super().__new__(cls)
cls.threads_list = []
return cls._instance
def stats(self, serializable=False) -> dict:
stats = {}
for th in self.threads_list:
age = th.loop_age()
if serializable:
age = str(age)
stats[th.name] = {
"name": th.name,
"class": th.__class__.__name__,
"alive": th.is_alive(),
"age": th.loop_age(),
"loop_count": th.loop_count,
}
return stats
@wrapt.synchronized(lock)
def add(self, thread_obj):
self.threads_list.append(thread_obj)
@wrapt.synchronized(lock)
def remove(self, thread_obj):
self.threads_list.remove(thread_obj)
@wrapt.synchronized(lock)
def stop_all(self):
"""Iterate over all threads and call stop on them."""
for th in self.threads_list:
LOG.info(f"Stopping Thread {th.name}")
if hasattr(th, "packet"):
LOG.info(F"{th.name} packet {th.packet}")
th.stop()
@wrapt.synchronized(lock)
def info(self):
"""Go through all the threads and collect info about each."""
info = {}
for thread in self.threads_list:
alive = thread.is_alive()
age = thread.loop_age()
key = thread.__class__.__name__
info[key] = {"alive": True if alive else False, "age": age, "name": thread.name}
return info
@wrapt.synchronized(lock)
def __len__(self):
return len(self.threads_list)


@ -3,14 +3,19 @@ import logging
import time
import tracemalloc
from loguru import logger
from oslo_config import cfg
from aprsd import client, packets, stats, utils
from aprsd import packets, utils
from aprsd.client import client_factory
from aprsd.log import log as aprsd_log
from aprsd.stats import collector
from aprsd.threads import APRSDThread, APRSDThreadList
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
LOGU = logger
class KeepAliveThread(APRSDThread):
@ -24,64 +29,75 @@ class KeepAliveThread(APRSDThread):
self.max_delta = datetime.timedelta(**max_timeout)
def loop(self):
if self.cntr % 60 == 0:
pkt_tracker = packets.PacketTrack()
stats_obj = stats.APRSDStats()
if self.loop_count % 60 == 0:
stats_json = collector.Collector().collect()
pl = packets.PacketList()
thread_list = APRSDThreadList()
now = datetime.datetime.now()
last_email = stats_obj.email_thread_time
if last_email:
email_thread_time = utils.strfdelta(now - last_email)
if "EmailStats" in stats_json:
email_stats = stats_json["EmailStats"]
if email_stats.get("last_check_time"):
email_thread_time = utils.strfdelta(now - email_stats["last_check_time"])
else:
email_thread_time = "N/A"
else:
email_thread_time = "N/A"
last_msg_time = utils.strfdelta(now - stats_obj.aprsis_keepalive)
if "APRSClientStats" in stats_json and stats_json["APRSClientStats"].get("transport") == "aprsis":
if stats_json["APRSClientStats"].get("server_keepalive"):
last_msg_time = utils.strfdelta(now - stats_json["APRSClientStats"]["server_keepalive"])
else:
last_msg_time = "N/A"
else:
last_msg_time = "N/A"
current, peak = tracemalloc.get_traced_memory()
stats_obj.set_memory(current)
stats_obj.set_memory_peak(peak)
login = CONF.callsign
tracked_packets = len(pkt_tracker)
tracked_packets = stats_json["PacketTrack"]["total_tracked"]
tx_msg = 0
rx_msg = 0
if "PacketList" in stats_json:
msg_packets = stats_json["PacketList"].get("MessagePacket")
if msg_packets:
tx_msg = msg_packets.get("tx", 0)
rx_msg = msg_packets.get("rx", 0)
keepalive = (
"{} - Uptime {} RX:{} TX:{} Tracker:{} Msgs TX:{} RX:{} "
"Last:{} Email: {} - RAM Current:{} Peak:{} Threads:{}"
"Last:{} Email: {} - RAM Current:{} Peak:{} Threads:{} LoggingQueue:{}"
).format(
login,
utils.strfdelta(stats_obj.uptime),
stats_json["APRSDStats"]["callsign"],
stats_json["APRSDStats"]["uptime"],
pl.total_rx(),
pl.total_tx(),
tracked_packets,
stats_obj._pkt_cnt["MessagePacket"]["tx"],
stats_obj._pkt_cnt["MessagePacket"]["rx"],
tx_msg,
rx_msg,
last_msg_time,
email_thread_time,
utils.human_size(current),
utils.human_size(peak),
stats_json["APRSDStats"]["memory_current_str"],
stats_json["APRSDStats"]["memory_peak_str"],
len(thread_list),
aprsd_log.logging_queue.qsize(),
)
LOG.info(keepalive)
thread_out = []
thread_info = {}
for thread in thread_list.threads_list:
alive = thread.is_alive()
age = thread.loop_age()
key = thread.__class__.__name__
thread_out.append(f"{key}:{alive}:{age}")
if key not in thread_info:
thread_info[key] = {}
thread_info[key]["alive"] = alive
thread_info[key]["age"] = age
if not alive:
LOG.error(f"Thread {thread}")
LOG.info(",".join(thread_out))
stats_obj.set_thread_info(thread_info)
if "APRSDThreadList" in stats_json:
thread_list = stats_json["APRSDThreadList"]
for thread_name in thread_list:
thread = thread_list[thread_name]
alive = thread["alive"]
age = thread["age"]
key = thread["name"]
if not alive:
LOG.error(f"Thread {thread}")
thread_hex = f"fg {utils.hex_from_name(key)}"
t_name = f"<{thread_hex}>{key:<15}</{thread_hex}>"
thread_msg = f"{t_name} Alive? {str(alive): <5} {str(age): <20}"
LOGU.opt(colors=True).info(thread_msg)
# LOG.info(f"{key: <15} Alive? {str(alive): <5} {str(age): <20}")
# check the APRS connection
cl = client.factory.create()
cl = client_factory.create()
# Reset the connection if it's dead and this isn't our
# First time through the loop.
# The first time through the loop can happen at startup where
@ -89,19 +105,19 @@ class KeepAliveThread(APRSDThread):
# to make its connection the first time.
if not cl.is_alive() and self.cntr > 0:
LOG.error(f"{cl.__class__.__name__} is not alive!!! Resetting")
client.factory.create().reset()
else:
# See if we should reset the aprs-is client
# Due to losing a keepalive from them
delta_dict = utils.parse_delta_str(last_msg_time)
delta = datetime.timedelta(**delta_dict)
if delta > self.max_delta:
# We haven't gotten a keepalive from aprs-is in a while
# reset the connection.
if not client.KISSClient.is_enabled():
LOG.warning(f"Resetting connection to APRS-IS {delta}")
client.factory.create().reset()
client_factory.create().reset()
# else:
# # See if we should reset the aprs-is client
# # Due to losing a keepalive from them
# delta_dict = utils.parse_delta_str(last_msg_time)
# delta = datetime.timedelta(**delta_dict)
#
# if delta > self.max_delta:
# # We haven't gotten a keepalive from aprs-is in a while
# # reset the connection.
# if not client.KISSClient.is_enabled():
# LOG.warning(f"Resetting connection to APRS-IS {delta}")
# client.factory.create().reset()
# Check version every day
delta = now - self.checker_time
@ -110,6 +126,6 @@ class KeepAliveThread(APRSDThread):
level, msg = utils._check_version()
if level:
LOG.warning(msg)
self.cntr += 1
self.cntr += 1
time.sleep(1)
return True


@ -1,25 +1,54 @@
import datetime
import logging
import threading
from oslo_config import cfg
import requests
import wrapt
from aprsd import threads
from aprsd.log import log
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
def send_log_entries(force=False):
"""Send all of the log entries to the web interface."""
if CONF.admin.web_enabled:
if force or LogEntries().is_purge_ready():
entries = LogEntries().get_all_and_purge()
if entries:
try:
requests.post(
f"http://{CONF.admin.web_ip}:{CONF.admin.web_port}/log_entries",
json=entries,
auth=(CONF.admin.user, CONF.admin.password),
)
except Exception:
LOG.warning(f"Failed to send log entries. len={len(entries)}")
class LogEntries:
entries = []
lock = threading.Lock()
_instance = None
last_purge = datetime.datetime.now()
max_delta = datetime.timedelta(
hours=0.0, minutes=0, seconds=2,
)
def __new__(cls, *args, **kwargs):
if cls._instance is None:
cls._instance = super().__new__(cls)
return cls._instance
def stats(self) -> dict:
return {
"log_entries": self.entries,
}
@wrapt.synchronized(lock)
def add(self, entry):
self.entries.append(entry)
@ -28,8 +57,18 @@ class LogEntries:
def get_all_and_purge(self):
entries = self.entries.copy()
self.entries = []
self.last_purge = datetime.datetime.now()
return entries
def is_purge_ready(self):
now = datetime.datetime.now()
if (
now - self.last_purge > self.max_delta
and len(self.entries) > 1
):
return True
return False
@wrapt.synchronized(lock)
def __len__(self):
return len(self.entries)
@ -40,6 +79,10 @@ class LogMonitorThread(threads.APRSDThread):
def __init__(self):
super().__init__("LogMonitorThread")
def stop(self):
send_log_entries(force=True)
super().stop()
def loop(self):
try:
record = log.logging_queue.get(block=True, timeout=2)
@ -54,6 +97,7 @@ class LogMonitorThread(threads.APRSDThread):
# Just ignore this
pass
send_log_entries()
return True
def json_record(self, record):


@ -6,7 +6,10 @@ import time
import aprslib
from oslo_config import cfg
from aprsd import client, packets, plugin
from aprsd import packets, plugin
from aprsd.client import client_factory
from aprsd.packets import collector
from aprsd.packets import log as packet_log
from aprsd.threads import APRSDThread, tx
@ -16,15 +19,20 @@ LOG = logging.getLogger("APRSD")
class APRSDRXThread(APRSDThread):
def __init__(self, packet_queue):
super().__init__("RX_MSG")
super().__init__("RX_PKT")
self.packet_queue = packet_queue
self._client = client.factory.create()
self._client = client_factory.create()
def stop(self):
self.thread_stop = True
client.factory.create().client.stop()
if self._client:
self._client.stop()
def loop(self):
if not self._client:
self._client = client_factory.create()
time.sleep(1)
return True
# set up the consumer of messages and block until a message arrives
try:
# This will register a packet consumer with aprslib
@ -36,23 +44,32 @@ class APRSDRXThread(APRSDThread):
# and the aprslib developer didn't want to allow a PR to add
# kwargs. :(
# https://github.com/rossengeorgiev/aprs-python/pull/56
self._client.client.consumer(
self.process_packet, raw=False, blocking=False,
self._client.consumer(
self._process_packet, raw=False, blocking=False,
)
except (
aprslib.exceptions.ConnectionDrop,
aprslib.exceptions.ConnectionError,
):
LOG.error("Connection dropped, reconnecting")
time.sleep(5)
# Force the deletion of the client object connected to aprs
# This will cause a reconnect, next time client.get_client()
# is called
self._client.reset()
time.sleep(5)
except Exception:
# LOG.exception(ex)
LOG.error("Resetting connection and trying again.")
self._client.reset()
time.sleep(5)
# Continue to loop
return True
def _process_packet(self, *args, **kwargs):
"""Intermediate callback so we can update the keepalive time."""
# Now call the 'real' packet processing for a RX'x packet
self.process_packet(*args, **kwargs)
@abc.abstractmethod
def process_packet(self, *args, **kwargs):
pass
@ -80,7 +97,8 @@ class APRSDDupeRXThread(APRSDRXThread):
"""
packet = self._client.decode_packet(*args, **kwargs)
# LOG.debug(raw)
packet.log(header="RX")
packet_log.log(packet)
pkt_list = packets.PacketList()
if isinstance(packet, packets.AckPacket):
# We don't need to drop AckPackets, those should be
@ -91,7 +109,6 @@ class APRSDDupeRXThread(APRSDRXThread):
# For RF based APRS Clients we can get duplicate packets
# So we need to track them and not process the dupes.
found = False
pkt_list = packets.PacketList()
try:
# Find the packet in the list of already seen packets
# Based on the packet.key
@ -100,14 +117,11 @@ class APRSDDupeRXThread(APRSDRXThread):
found = False
if not found:
# If we are in the process of already ack'ing
# a packet, we should drop the packet
# because it's a dupe within the time that
# we send the 3 acks for the packet.
pkt_list.rx(packet)
# We haven't seen this packet before, so we process it.
collector.PacketCollector().rx(packet)
self.packet_queue.put(packet)
elif packet.timestamp - found.timestamp < CONF.packet_dupe_timeout:
# If the packet came in within 60 seconds of the
# If the packet came in within N seconds of the
# Last time seeing the packet, then we drop it as a dupe.
LOG.warning(f"Packet {packet.from_call}:{packet.msgNo} already tracked, dropping.")
else:
@ -115,7 +129,7 @@ class APRSDDupeRXThread(APRSDRXThread):
f"Packet {packet.from_call}:{packet.msgNo} already tracked "
f"but older than {CONF.packet_dupe_timeout} seconds. processing.",
)
pkt_list.rx(packet)
collector.PacketCollector().rx(packet)
self.packet_queue.put(packet)
@ -137,21 +151,29 @@ class APRSDProcessPacketThread(APRSDThread):
def __init__(self, packet_queue):
self.packet_queue = packet_queue
super().__init__("ProcessPKT")
self._loop_cnt = 1
if not CONF.enable_sending_ack_packets:
LOG.warning(
"Sending ack packets is disabled, messages "
"will not be acknowledged.",
)
def process_ack_packet(self, packet):
"""We got an ack for a message, no need to resend it."""
ack_num = packet.msgNo
LOG.info(f"Got ack for message {ack_num}")
pkt_tracker = packets.PacketTrack()
pkt_tracker.remove(ack_num)
LOG.debug(f"Got ack for message {ack_num}")
collector.PacketCollector().rx(packet)
def process_piggyback_ack(self, packet):
"""We got an ack embedded in a packet."""
ack_num = packet.ackMsgNo
LOG.debug(f"Got PiggyBackAck for message {ack_num}")
collector.PacketCollector().rx(packet)
def process_reject_packet(self, packet):
"""We got a reject message for a packet. Stop sending the message."""
ack_num = packet.msgNo
LOG.info(f"Got REJECT for message {ack_num}")
pkt_tracker = packets.PacketTrack()
pkt_tracker.remove(ack_num)
LOG.debug(f"Got REJECT for message {ack_num}")
collector.PacketCollector().rx(packet)
def loop(self):
try:
@ -160,12 +182,11 @@ class APRSDProcessPacketThread(APRSDThread):
self.process_packet(packet)
except queue.Empty:
pass
self._loop_cnt += 1
return True
def process_packet(self, packet):
"""Process a packet received from aprs-is server."""
LOG.debug(f"ProcessPKT-LOOP {self._loop_cnt}")
LOG.debug(f"ProcessPKT-LOOP {self.loop_count}")
our_call = CONF.callsign.lower()
from_call = packet.from_call
@ -188,6 +209,10 @@ class APRSDProcessPacketThread(APRSDThread):
):
self.process_reject_packet(packet)
else:
if hasattr(packet, "ackMsgNo") and packet.ackMsgNo:
# we got an ack embedded in this packet
# we need to handle the ack
self.process_piggyback_ack(packet)
# Only ack messages that were sent directly to us
if isinstance(packet, packets.MessagePacket):
if to_call and to_call.lower() == our_call:

aprsd/threads/stats.py (new file, 44 lines)

@ -0,0 +1,44 @@
import logging
import threading
import time
from oslo_config import cfg
import wrapt
from aprsd.stats import collector
from aprsd.threads import APRSDThread
from aprsd.utils import objectstore
CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
class StatsStore(objectstore.ObjectStoreMixin):
"""Container to save the stats from the collector."""
lock = threading.Lock()
data = {}
@wrapt.synchronized(lock)
def add(self, stats: dict):
self.data = stats
class APRSDStatsStoreThread(APRSDThread):
"""Save APRSD Stats to disk periodically."""
# how often in seconds to write the file
save_interval = 10
def __init__(self):
super().__init__("StatsStore")
def loop(self):
if self.loop_count % self.save_interval == 0:
stats = collector.Collector().collect()
ss = StatsStore()
ss.add(stats)
ss.save()
time.sleep(1)
return True
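
Because StatsStore pickles its plain data dict via ObjectStoreMixin (see the objectstore changes later in this diff), the last snapshot can be read back with the standard library alone. A sketch, assuming the save file lives under CONF.save_location and is named after the lowered class name (the exact filename template is not shown in this hunk):

import pickle

# hypothetical path; adjust to your CONF.save_location
with open("config/statsstore.p", "rb") as fp:
    snapshot = pickle.load(fp)
print(snapshot.get("APRSDStats", {}).get("uptime"))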


@ -1,4 +1,5 @@
import logging
import threading
import time
from oslo_config import cfg
@ -6,11 +7,14 @@ from rush import quota, throttle
from rush.contrib import decorator
from rush.limiters import periodic
from rush.stores import dictionary
import wrapt
from aprsd import client
from aprsd import conf # noqa
from aprsd import threads as aprsd_threads
from aprsd.packets import core, tracker
from aprsd.client import client_factory
from aprsd.packets import collector, core
from aprsd.packets import log as packet_log
from aprsd.packets import tracker
CONF = cfg.CONF
@ -35,16 +39,24 @@ ack_t = throttle.Throttle(
msg_throttle_decorator = decorator.ThrottleDecorator(throttle=msg_t)
ack_throttle_decorator = decorator.ThrottleDecorator(throttle=ack_t)
s_lock = threading.Lock()
@wrapt.synchronized(s_lock)
@msg_throttle_decorator.sleep_and_retry
def send(packet: core.Packet, direct=False, aprs_client=None):
"""Send a packet either in a thread or directly to the client."""
# prepare the packet for sending.
# This constructs the packet.raw
packet.prepare()
# Have to call the collector to track the packet
# After prepare, as prepare assigns the msgNo
collector.PacketCollector().tx(packet)
if isinstance(packet, core.AckPacket):
_send_ack(packet, direct=direct, aprs_client=aprs_client)
if CONF.enable_sending_ack_packets:
_send_ack(packet, direct=direct, aprs_client=aprs_client)
else:
LOG.info("Sending ack packets is disabled. Not sending AckPacket.")
else:
_send_packet(packet, direct=direct, aprs_client=aprs_client)
@ -71,11 +83,18 @@ def _send_direct(packet, aprs_client=None):
if aprs_client:
cl = aprs_client
else:
cl = client.factory.create()
cl = client_factory.create()
packet.update_timestamp()
packet.log(header="TX")
cl.send(packet)
packet_log.log(packet, tx=True)
try:
cl.send(packet)
except Exception as e:
LOG.error(f"Failed to send packet: {packet}")
LOG.error(e)
return False
else:
return True
class SendPacketThread(aprsd_threads.APRSDThread):
@ -83,10 +102,7 @@ class SendPacketThread(aprsd_threads.APRSDThread):
def __init__(self, packet):
self.packet = packet
name = self.packet.raw[:5]
super().__init__(f"TXPKT-{self.packet.msgNo}-{name}")
pkt_tracker = tracker.PacketTrack()
pkt_tracker.add(packet)
super().__init__(f"TX-{packet.to_call}-{self.packet.msgNo}")
def loop(self):
"""Loop until a message is acked or it gets delayed.
@ -112,7 +128,7 @@ class SendPacketThread(aprsd_threads.APRSDThread):
return False
else:
send_now = False
if packet.send_count == packet.retry_count:
if packet.send_count >= packet.retry_count:
# we reached the send limit, don't send again
# TODO(hemna) - Need to put this in a delayed queue?
LOG.info(
@ -121,8 +137,7 @@ class SendPacketThread(aprsd_threads.APRSDThread):
"Message Send Complete. Max attempts reached"
f" {packet.retry_count}",
)
if not packet.allow_delay:
pkt_tracker.remove(packet.msgNo)
pkt_tracker.remove(packet.msgNo)
return False
# Message is still outstanding and needs to be acked.
@ -141,8 +156,17 @@ class SendPacketThread(aprsd_threads.APRSDThread):
# no attempt time, so lets send it, and start
# tracking the time.
packet.last_send_time = int(round(time.time()))
send(packet, direct=True)
packet.send_count += 1
sent = False
try:
sent = _send_direct(packet)
except Exception:
LOG.error(f"Failed to send packet: {packet}")
else:
# If an exception happens while sending
# we don't want this attempt to count
# against the packet
if sent:
packet.send_count += 1
time.sleep(1)
# Make sure we get called again.
@ -152,22 +176,24 @@ class SendPacketThread(aprsd_threads.APRSDThread):
class SendAckThread(aprsd_threads.APRSDThread):
loop_count: int = 1
max_retries = 3
def __init__(self, packet):
self.packet = packet
super().__init__(f"SendAck-{self.packet.msgNo}")
super().__init__(f"TXAck-{packet.to_call}-{self.packet.msgNo}")
self.max_retries = CONF.default_ack_send_count
def loop(self):
"""Separate thread to send acks with retries."""
send_now = False
if self.packet.send_count == self.packet.retry_count:
if self.packet.send_count == self.max_retries:
# we reached the send limit, don't send again
# TODO(hemna) - Need to put this in a delayed queue?
LOG.info(
LOG.debug(
f"{self.packet.__class__.__name__}"
f"({self.packet.msgNo}) "
"Send Complete. Max attempts reached"
f" {self.packet.retry_count}",
f" {self.max_retries}",
)
return False
@ -188,8 +214,18 @@ class SendAckThread(aprsd_threads.APRSDThread):
send_now = True
if send_now:
send(self.packet, direct=True)
self.packet.send_count += 1
sent = False
try:
sent = _send_direct(self.packet)
except Exception:
LOG.error(f"Failed to send packet: {self.packet}")
else:
# If an exception happens while sending
# we don't want this attempt to count
# against the packet
if sent:
self.packet.send_count += 1
self.packet.last_send_time = int(round(time.time()))
time.sleep(1)
@ -230,7 +266,15 @@ class BeaconSendThread(aprsd_threads.APRSDThread):
comment="APRSD GPS Beacon",
symbol=CONF.beacon_symbol,
)
send(pkt, direct=True)
try:
# Only send it once
pkt.retry_count = 1
send(pkt, direct=True)
except Exception as e:
LOG.error(f"Failed to send beacon: {e}")
client_factory.create().reset()
time.sleep(5)
self._loop_cnt += 1
time.sleep(1)
return True


@ -1,6 +1,8 @@
"""Utilities and helper functions."""
import errno
import functools
import math
import os
import re
import sys
@ -19,7 +21,18 @@ from .ring_buffer import RingBuffer # noqa: F401
if sys.version_info.major == 3 and sys.version_info.minor >= 3:
from collections.abc import MutableMapping
else:
from collections import MutableMapping
from collections.abc import MutableMapping
def singleton(cls):
"""Make a class a Singleton class (only one instance)"""
@functools.wraps(cls)
def wrapper_singleton(*args, **kwargs):
if wrapper_singleton.instance is None:
wrapper_singleton.instance = cls(*args, **kwargs)
return wrapper_singleton.instance
wrapper_singleton.instance = None
return wrapper_singleton
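
A quick sketch of the decorator's behavior; the Demo class is hypothetical:

@singleton
class Demo:
    def __init__(self):
        self.values = {}

a = Demo()
b = Demo()
assert a is b  # every call returns the one shared instance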
def env(*vars, **kwargs):
@ -70,6 +83,16 @@ def rgb_from_name(name):
return red, green, blue
def hextriplet(colortuple):
"""Convert a color tuple to a hex triplet."""
return "#" + "".join(f"{i:02X}" for i in colortuple)
def hex_from_name(name):
"""Create a hex color from a string."""
return hextriplet(rgb_from_name(name))
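
For example (the tuple below is arbitrary; hex_from_name() output depends on rgb_from_name()):

print(hextriplet((255, 0, 128)))  # '#FF0080'
print(hex_from_name("RX_PKT"))    # a stable '#RRGGBB' string derived from the name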
def human_size(bytes, units=None):
"""Returns a human readable string representation of bytes"""
if not units:
@ -136,7 +159,6 @@ def parse_delta_str(s):
def load_entry_points(group):
"""Load all extensions registered to the given entry point group"""
print(f"Loading extensions for group {group}")
try:
import importlib_metadata
except ImportError:
@ -150,3 +172,47 @@ def load_entry_points(group):
except Exception as e:
print(f"Extension {ep.name} of group {group} failed to load with {e}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
def calculate_initial_compass_bearing(start, end):
if (type(start) != tuple) or (type(end) != tuple): # noqa: E721
raise TypeError("Only tuples are supported as arguments")
lat1 = math.radians(float(start[0]))
lat2 = math.radians(float(end[0]))
diff_long = math.radians(float(end[1]) - float(start[1]))
x = math.sin(diff_long) * math.cos(lat2)
y = math.cos(lat1) * math.sin(lat2) - (
math.sin(lat1)
* math.cos(lat2) * math.cos(diff_long)
)
initial_bearing = math.atan2(x, y)
# Now we have the initial bearing but math.atan2 return values
# from -180° to + 180° which is not what we want for a compass bearing
# The solution is to normalize the initial bearing as shown below
initial_bearing = math.degrees(initial_bearing)
compass_bearing = (initial_bearing + 360) % 360
return compass_bearing
def degrees_to_cardinal(bearing, full_string=False):
if full_string:
directions = [
"North", "North-Northeast", "Northeast", "East-Northeast", "East", "East-Southeast",
"Southeast", "South-Southeast", "South", "South-Southwest", "Southwest", "West-Southwest",
"West", "West-Northwest", "Northwest", "North-Northwest", "North",
]
else:
directions = [
"N", "NNE", "NE", "ENE", "E", "ESE",
"SE", "SSE", "S", "SSW", "SW", "WSW",
"W", "WNW", "NW", "NNW", "N",
]
cardinal = directions[round(bearing / 22.5)]
return cardinal
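
A worked example combining the two helpers (coordinates are arbitrary, roughly Washington DC to New York City):

bearing = calculate_initial_compass_bearing((38.9, -77.0), (40.7, -74.0))
print(round(bearing, 1), degrees_to_cardinal(bearing))  # ~51.1 NE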


@ -1,9 +1,13 @@
from multiprocessing import RawValue
import random
import threading
import wrapt
MAX_PACKET_ID = 9999
class PacketCounter:
"""
Global Packet id counter class.
@ -17,19 +21,18 @@ class PacketCounter:
"""
_instance = None
max_count = 9999
lock = threading.Lock()
def __new__(cls, *args, **kwargs):
"""Make this a singleton class."""
if cls._instance is None:
cls._instance = super().__new__(cls, *args, **kwargs)
cls._instance.val = RawValue("i", 1)
cls._instance.val = RawValue("i", random.randint(1, MAX_PACKET_ID))
return cls._instance
@wrapt.synchronized(lock)
def increment(self):
if self.val.value == self.max_count:
if self.val.value == MAX_PACKET_ID:
self.val.value = 1
else:
self.val.value += 1
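
The behavior after this change, sketched; the counter now starts at a random id instead of always 1:

c = PacketCounter()
start = c.val.value  # random value in 1..MAX_PACKET_ID
c.increment()        # wraps back to 1 at MAX_PACKET_ID
expected = 1 if start == MAX_PACKET_ID else start + 1
assert c.val.value == expected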


@ -3,6 +3,8 @@ import decimal
import json
import sys
from aprsd.packets import core
class EnhancedJSONEncoder(json.JSONEncoder):
def default(self, obj):
@ -42,6 +44,24 @@ class EnhancedJSONEncoder(json.JSONEncoder):
return super().default(obj)
class SimpleJSONEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, datetime.datetime):
return obj.isoformat()
elif isinstance(obj, datetime.date):
return str(obj)
elif isinstance(obj, datetime.time):
return str(obj)
elif isinstance(obj, datetime.timedelta):
return str(obj)
elif isinstance(obj, decimal.Decimal):
return str(obj)
elif isinstance(obj, core.Packet):
return obj.to_dict()
else:
return super().default(obj)
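
A usage sketch for the new encoder (values are arbitrary):

import datetime
import json

payload = {
    "when": datetime.datetime(2024, 1, 1, 12, 0),
    "delta": datetime.timedelta(seconds=90),
}
print(json.dumps(payload, cls=SimpleJSONEncoder))
# {"when": "2024-01-01T12:00:00", "delta": "0:01:30"}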
class EnhancedJSONDecoder(json.JSONDecoder):
def __init__(self, *args, **kwargs):

View File

@ -2,6 +2,7 @@ import logging
import os
import pathlib
import pickle
import threading
from oslo_config import cfg
@ -25,19 +26,28 @@ class ObjectStoreMixin:
aprsd server -f (flush) will wipe all saved objects.
"""
def __init__(self):
self.lock = threading.RLock()
def __len__(self):
return len(self.data)
with self.lock:
return len(self.data)
def __iter__(self):
return iter(self.data)
with self.lock:
return iter(self.data)
def get_all(self):
with self.lock:
return self.data
def get(self, id):
def get(self, key):
with self.lock:
return self.data[id]
return self.data.get(key)
def copy(self):
with self.lock:
return self.data.copy()
def _init_store(self):
if not CONF.enable_save:
@ -58,31 +68,26 @@ class ObjectStoreMixin:
self.__class__.__name__.lower(),
)
def _dump(self):
dump = {}
with self.lock:
for key in self.data.keys():
dump[key] = self.data[key]
return dump
def save(self):
"""Save any queued to disk?"""
if not CONF.enable_save:
return
self._init_store()
save_filename = self._save_filename()
if len(self) > 0:
LOG.info(
f"{self.__class__.__name__}::Saving"
f" {len(self)} entries to disk at"
f"{CONF.save_location}",
f" {len(self)} entries to disk at "
f"{save_filename}",
)
with open(self._save_filename(), "wb+") as fp:
pickle.dump(self._dump(), fp)
with self.lock:
with open(save_filename, "wb+") as fp:
pickle.dump(self.data, fp)
else:
LOG.debug(
"{} Nothing to save, flushing old save file '{}'".format(
self.__class__.__name__,
self._save_filename(),
save_filename,
),
)
self.flush()
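
A minimal sketch of a container picking up the new RLock-guarded accessors simply by mixing the class in; DemoStore is hypothetical:

class DemoStore(ObjectStoreMixin):
    data = {}

store = DemoStore()
store.data["k"] = "v"
print(store.get("k"))  # 'v', read under the instance RLock
print(len(store))      # len() is lock-protected as well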


@ -1,189 +1,4 @@
/* PrismJS 1.24.1
https://prismjs.com/download.html#themes=prism-tomorrow&languages=markup+css+clike+javascript+log&plugins=show-language+toolbar */
/**
* prism.js tomorrow night eighties for JavaScript, CoffeeScript, CSS and HTML
* Based on https://github.com/chriskempson/tomorrow-theme
* @author Rose Pritchard
*/
code[class*="language-"],
pre[class*="language-"] {
color: #ccc;
background: none;
font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace;
font-size: 1em;
text-align: left;
white-space: pre;
word-spacing: normal;
word-break: normal;
word-wrap: normal;
line-height: 1.5;
-moz-tab-size: 4;
-o-tab-size: 4;
tab-size: 4;
-webkit-hyphens: none;
-moz-hyphens: none;
-ms-hyphens: none;
hyphens: none;
}
/* Code blocks */
pre[class*="language-"] {
padding: 1em;
margin: .5em 0;
overflow: auto;
}
:not(pre) > code[class*="language-"],
pre[class*="language-"] {
background: #2d2d2d;
}
/* Inline code */
:not(pre) > code[class*="language-"] {
padding: .1em;
border-radius: .3em;
white-space: normal;
}
.token.comment,
.token.block-comment,
.token.prolog,
.token.doctype,
.token.cdata {
color: #999;
}
.token.punctuation {
color: #ccc;
}
.token.tag,
.token.attr-name,
.token.namespace,
.token.deleted {
color: #e2777a;
}
.token.function-name {
color: #6196cc;
}
.token.boolean,
.token.number,
.token.function {
color: #f08d49;
}
.token.property,
.token.class-name,
.token.constant,
.token.symbol {
color: #f8c555;
}
.token.selector,
.token.important,
.token.atrule,
.token.keyword,
.token.builtin {
color: #cc99cd;
}
.token.string,
.token.char,
.token.attr-value,
.token.regex,
.token.variable {
color: #7ec699;
}
.token.operator,
.token.entity,
.token.url {
color: #67cdcc;
}
.token.important,
.token.bold {
font-weight: bold;
}
.token.italic {
font-style: italic;
}
.token.entity {
cursor: help;
}
.token.inserted {
color: green;
}
div.code-toolbar {
position: relative;
}
div.code-toolbar > .toolbar {
position: absolute;
top: .3em;
right: .2em;
transition: opacity 0.3s ease-in-out;
opacity: 0;
}
div.code-toolbar:hover > .toolbar {
opacity: 1;
}
/* Separate line b/c rules are thrown out if selector is invalid.
IE11 and old Edge versions don't support :focus-within. */
div.code-toolbar:focus-within > .toolbar {
opacity: 1;
}
div.code-toolbar > .toolbar > .toolbar-item {
display: inline-block;
}
div.code-toolbar > .toolbar > .toolbar-item > a {
cursor: pointer;
}
div.code-toolbar > .toolbar > .toolbar-item > button {
background: none;
border: 0;
color: inherit;
font: inherit;
line-height: normal;
overflow: visible;
padding: 0;
-webkit-user-select: none; /* for button */
-moz-user-select: none;
-ms-user-select: none;
}
div.code-toolbar > .toolbar > .toolbar-item > a,
div.code-toolbar > .toolbar > .toolbar-item > button,
div.code-toolbar > .toolbar > .toolbar-item > span {
color: #bbb;
font-size: .8em;
padding: 0 .5em;
background: #f5f2f0;
background: rgba(224, 224, 224, 0.2);
box-shadow: 0 2px 0 0 rgba(0,0,0,0.2);
border-radius: .5em;
}
div.code-toolbar > .toolbar > .toolbar-item > a:hover,
div.code-toolbar > .toolbar > .toolbar-item > a:focus,
div.code-toolbar > .toolbar > .toolbar-item > button:hover,
div.code-toolbar > .toolbar > .toolbar-item > button:focus,
div.code-toolbar > .toolbar > .toolbar-item > span:hover,
div.code-toolbar > .toolbar > .toolbar-item > span:focus {
color: inherit;
text-decoration: none;
}
/* PrismJS 1.29.0
https://prismjs.com/download.html#themes=prism-tomorrow&languages=markup+css+clike+javascript+json+json5+log&plugins=show-language+toolbar */
code[class*=language-],pre[class*=language-]{color:#ccc;background:0 0;font-family:Consolas,Monaco,'Andale Mono','Ubuntu Mono',monospace;font-size:1em;text-align:left;white-space:pre;word-spacing:normal;word-break:normal;word-wrap:normal;line-height:1.5;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-hyphens:none;-moz-hyphens:none;-ms-hyphens:none;hyphens:none}pre[class*=language-]{padding:1em;margin:.5em 0;overflow:auto}:not(pre)>code[class*=language-],pre[class*=language-]{background:#2d2d2d}:not(pre)>code[class*=language-]{padding:.1em;border-radius:.3em;white-space:normal}.token.block-comment,.token.cdata,.token.comment,.token.doctype,.token.prolog{color:#999}.token.punctuation{color:#ccc}.token.attr-name,.token.deleted,.token.namespace,.token.tag{color:#e2777a}.token.function-name{color:#6196cc}.token.boolean,.token.function,.token.number{color:#f08d49}.token.class-name,.token.constant,.token.property,.token.symbol{color:#f8c555}.token.atrule,.token.builtin,.token.important,.token.keyword,.token.selector{color:#cc99cd}.token.attr-value,.token.char,.token.regex,.token.string,.token.variable{color:#7ec699}.token.entity,.token.operator,.token.url{color:#67cdcc}.token.bold,.token.important{font-weight:700}.token.italic{font-style:italic}.token.entity{cursor:help}.token.inserted{color:green}
div.code-toolbar{position:relative}div.code-toolbar>.toolbar{position:absolute;z-index:10;top:.3em;right:.2em;transition:opacity .3s ease-in-out;opacity:0}div.code-toolbar:hover>.toolbar{opacity:1}div.code-toolbar:focus-within>.toolbar{opacity:1}div.code-toolbar>.toolbar>.toolbar-item{display:inline-block}div.code-toolbar>.toolbar>.toolbar-item>a{cursor:pointer}div.code-toolbar>.toolbar>.toolbar-item>button{background:0 0;border:0;color:inherit;font:inherit;line-height:normal;overflow:visible;padding:0;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none}div.code-toolbar>.toolbar>.toolbar-item>a,div.code-toolbar>.toolbar>.toolbar-item>button,div.code-toolbar>.toolbar>.toolbar-item>span{color:#bbb;font-size:.8em;padding:0 .5em;background:#f5f2f0;background:rgba(224,224,224,.2);box-shadow:0 2px 0 0 rgba(0,0,0,.2);border-radius:.5em}div.code-toolbar>.toolbar>.toolbar-item>a:focus,div.code-toolbar>.toolbar>.toolbar-item>a:hover,div.code-toolbar>.toolbar>.toolbar-item>button:focus,div.code-toolbar>.toolbar>.toolbar-item>button:hover,div.code-toolbar>.toolbar>.toolbar-item>span:focus,div.code-toolbar>.toolbar>.toolbar-item>span:hover{color:inherit;text-decoration:none}


@ -219,15 +219,17 @@ function updateQuadData(chart, label, first, second, third, fourth) {
}
function update_stats( data ) {
our_callsign = data["stats"]["aprsd"]["callsign"];
$("#version").text( data["stats"]["aprsd"]["version"] );
our_callsign = data["APRSDStats"]["callsign"];
$("#version").text( data["APRSDStats"]["version"] );
$("#aprs_connection").html( data["aprs_connection"] );
$("#uptime").text( "uptime: " + data["stats"]["aprsd"]["uptime"] );
$("#uptime").text( "uptime: " + data["APRSDStats"]["uptime"] );
const html_pretty = Prism.highlight(JSON.stringify(data, null, '\t'), Prism.languages.json, 'json');
$("#jsonstats").html(html_pretty);
short_time = data["time"].split(/\s(.+)/)[1];
updateDualData(packets_chart, short_time, data["stats"]["packets"]["sent"], data["stats"]["packets"]["received"]);
updateQuadData(message_chart, short_time, data["stats"]["messages"]["sent"], data["stats"]["messages"]["received"], data["stats"]["messages"]["ack_sent"], data["stats"]["messages"]["ack_recieved"]);
updateDualData(email_chart, short_time, data["stats"]["email"]["sent"], data["stats"]["email"]["recieved"]);
updateDualData(memory_chart, short_time, data["stats"]["aprsd"]["memory_peak"], data["stats"]["aprsd"]["memory_current"]);
packet_list = data["PacketList"]["packets"];
updateDualData(packets_chart, short_time, data["PacketList"]["sent"], data["PacketList"]["received"]);
updateQuadData(message_chart, short_time, packet_list["MessagePacket"]["tx"], packet_list["MessagePacket"]["rx"],
packet_list["AckPacket"]["tx"], packet_list["AckPacket"]["rx"]);
updateDualData(email_chart, short_time, data["EmailStats"]["sent"], data["EmailStats"]["received"]);
updateDualData(memory_chart, short_time, data["APRSDStats"]["memory_peak"], data["APRSDStats"]["memory_current"]);
}


@ -8,6 +8,8 @@ var packet_types_data = {};
var mem_current = []
var mem_peak = []
var thread_current = []
function start_charts() {
console.log("start_charts() called");
@ -17,6 +19,7 @@ function start_charts() {
create_messages_chart();
create_ack_chart();
create_memory_chart();
create_thread_chart();
}
@ -258,6 +261,49 @@ function create_memory_chart() {
memory_chart.setOption(option);
}
function create_thread_chart() {
thread_canvas = document.getElementById('threadChart');
thread_chart = echarts.init(thread_canvas);
// Specify the configuration items and data for the chart
var option = {
title: {
text: 'Active Threads'
},
legend: {},
tooltip: {
trigger: 'axis'
},
toolbox: {
show: true,
feature: {
mark : {show: true},
dataView : {show: true, readOnly: false},
magicType : {show: true, type: ['line', 'bar']},
restore : {show: true},
saveAsImage : {show: true}
}
},
calculable: true,
xAxis: { type: 'time' },
yAxis: { },
series: [
{
name: 'current',
type: 'line',
smooth: true,
color: 'red',
encode: {
x: 'timestamp',
y: 'current' // refer sensor 1 value
}
}
]
};
thread_chart.setOption(option);
}
@ -327,7 +373,6 @@ function updatePacketTypesChart() {
option = {
series: series
}
console.log(option)
packet_types_chart.setOption(option);
}
@ -372,6 +417,21 @@ function updateMemChart(time, current, peak) {
memory_chart.setOption(option);
}
function updateThreadChart(time, threads) {
keys = Object.keys(threads);
thread_count = keys.length;
thread_current.push([time, thread_count]);
option = {
series: [
{
name: 'current',
data: thread_current,
}
]
}
thread_chart.setOption(option);
}
function updateMessagesChart() {
updateTypeChart(message_chart, "MessagePacket")
}
@ -381,22 +441,24 @@ function updateAcksChart() {
}
function update_stats( data ) {
console.log(data);
our_callsign = data["stats"]["aprsd"]["callsign"];
$("#version").text( data["stats"]["aprsd"]["version"] );
$("#aprs_connection").html( data["aprs_connection"] );
$("#uptime").text( "uptime: " + data["stats"]["aprsd"]["uptime"] );
console.log("update_stats() echarts.js called")
stats = data["stats"];
our_callsign = stats["APRSDStats"]["callsign"];
$("#version").text( stats["APRSDStats"]["version"] );
$("#aprs_connection").html( stats["aprs_connection"] );
$("#uptime").text( "uptime: " + stats["APRSDStats"]["uptime"] );
const html_pretty = Prism.highlight(JSON.stringify(data, null, '\t'), Prism.languages.json, 'json');
$("#jsonstats").html(html_pretty);
t = Date.parse(data["time"]);
ts = new Date(t);
updatePacketData(packets_chart, ts, data["stats"]["packets"]["sent"], data["stats"]["packets"]["received"]);
updatePacketTypesData(ts, data["stats"]["packets"]["types"]);
updatePacketData(packets_chart, ts, stats["PacketList"]["tx"], stats["PacketList"]["rx"]);
updatePacketTypesData(ts, stats["PacketList"]["types"]);
updatePacketTypesChart();
updateMessagesChart();
updateAcksChart();
updateMemChart(ts, data["stats"]["aprsd"]["memory_current"], data["stats"]["aprsd"]["memory_peak"]);
updateMemChart(ts, stats["APRSDStats"]["memory_current"], stats["APRSDStats"]["memory_peak"]);
updateThreadChart(ts, stats["APRSDThreadList"]);
//updateQuadData(message_chart, short_time, data["stats"]["messages"]["sent"], data["stats"]["messages"]["received"], data["stats"]["messages"]["ack_sent"], data["stats"]["messages"]["ack_recieved"]);
//updateDualData(email_chart, short_time, data["stats"]["email"]["sent"], data["stats"]["email"]["recieved"]);
//updateDualData(memory_chart, short_time, data["stats"]["aprsd"]["memory_peak"], data["stats"]["aprsd"]["memory_current"]);


@ -24,11 +24,15 @@ function ord(str){return str.charCodeAt(0);}
function update_watchlist( data ) {
// Update the watch list
stats = data["stats"];
if (stats.hasOwnProperty("WatchList") == false) {
return
}
var watchdiv = $("#watchDiv");
var html_str = '<table class="ui celled striped table"><thead><tr><th>HAM Callsign</th><th>Age since last seen by APRSD</th></tr></thead><tbody>'
watchdiv.html('')
jQuery.each(data["stats"]["aprsd"]["watch_list"], function(i, val) {
jQuery.each(stats["WatchList"], function(i, val) {
html_str += '<tr><td class="collapsing"><img id="callsign_'+i+'" class="aprsd_1"></img>' + i + '</td><td>' + val["last"] + '</td></tr>'
});
html_str += "</tbody></table>";
@ -60,12 +64,16 @@ function update_watchlist_from_packet(callsign, val) {
}
function update_seenlist( data ) {
stats = data["stats"];
if (stats.hasOwnProperty("SeenList") == false) {
return
}
var seendiv = $("#seenDiv");
var html_str = '<table class="ui celled striped table">'
html_str += '<thead><tr><th>HAM Callsign</th><th>Age since last seen by APRSD</th>'
html_str += '<th>Number of packets RX</th></tr></thead><tbody>'
seendiv.html('')
var seen_list = data["stats"]["aprsd"]["seen_list"]
var seen_list = stats["SeenList"]
var len = Object.keys(seen_list).length
$('#seen_count').html(len)
jQuery.each(seen_list, function(i, val) {
@ -79,6 +87,10 @@ function update_seenlist( data ) {
}
function update_plugins( data ) {
stats = data["stats"];
if (stats.hasOwnProperty("PluginManager") == false) {
return
}
var plugindiv = $("#pluginDiv");
var html_str = '<table class="ui celled striped table"><thead><tr>'
html_str += '<th>Plugin Name</th><th>Plugin Enabled?</th>'
@ -87,7 +99,7 @@ function update_plugins( data ) {
html_str += '</tr></thead><tbody>'
plugindiv.html('')
var plugins = data["stats"]["plugins"];
var plugins = stats["PluginManager"];
var keys = Object.keys(plugins);
keys.sort();
for (var i=0; i<keys.length; i++) { // now lets iterate in sort order
@ -101,14 +113,42 @@ function update_plugins( data ) {
plugindiv.append(html_str);
}
function update_threads( data ) {
stats = data["stats"];
if (stats.hasOwnProperty("APRSDThreadList") == false) {
return
}
var threadsdiv = $("#threadsDiv");
var countdiv = $("#thread_count");
var html_str = '<table class="ui celled striped table"><thead><tr>'
html_str += '<th>Thread Name</th><th>Alive?</th>'
html_str += '<th>Age</th><th>Loop Count</th>'
html_str += '</tr></thead><tbody>'
threadsdiv.html('')
var threads = stats["APRSDThreadList"];
var keys = Object.keys(threads);
countdiv.html(keys.length);
keys.sort();
for (var i=0; i<keys.length; i++) { // now lets iterate in sort order
var key = keys[i];
var val = threads[key];
html_str += '<tr><td class="collapsing">' + key + '</td>';
html_str += '<td>' + val["alive"] + '</td><td>' + val["age"] + '</td>';
html_str += '<td>' + val["loop_count"] + '</td></tr>';
}
html_str += "</tbody></table>";
threadsdiv.append(html_str);
}
function update_packets( data ) {
var packetsdiv = $("#packetsDiv");
//nuke the contents first, then add to it.
if (size_dict(packet_list) == 0 && size_dict(data) > 0) {
packetsdiv.html('')
}
jQuery.each(data, function(i, val) {
pkt = JSON.parse(val);
jQuery.each(data.packets, function(i, val) {
pkt = val;
update_watchlist_from_packet(pkt['from_call'], pkt);
if ( packet_list.hasOwnProperty(pkt['timestamp']) == false ) {
@ -167,6 +207,7 @@ function start_update() {
update_watchlist(data);
update_seenlist(data);
update_plugins(data);
update_threads(data);
},
complete: function() {
setTimeout(statsworker, 10000);

File diff suppressed because one or more lines are too long


@ -1,57 +0,0 @@
/* Root element */
.json-document {
padding: 1em 2em;
}
/* Syntax highlighting for JSON objects */
ul.json-dict, ol.json-array {
list-style-type: none;
margin: 0 0 0 1px;
border-left: 1px dotted #ccc;
padding-left: 2em;
}
.json-string {
color: #0B7500;
}
.json-literal {
color: #1A01CC;
font-weight: bold;
}
/* Toggle button */
a.json-toggle {
position: relative;
color: inherit;
text-decoration: none;
}
a.json-toggle:focus {
outline: none;
}
a.json-toggle:before {
font-size: 1.1em;
color: #c0c0c0;
content: "\25BC"; /* down arrow */
position: absolute;
display: inline-block;
width: 1em;
text-align: center;
line-height: 1em;
left: -1.2em;
}
a.json-toggle:hover:before {
color: #aaa;
}
a.json-toggle.collapsed:before {
/* Use rotated down arrow, prevents right arrow appearing smaller than down arrow in some browsers */
transform: rotate(-90deg);
}
/* Collapsable placeholder links */
a.json-placeholder {
color: #aaa;
padding: 0 1em;
text-decoration: none;
}
a.json-placeholder:hover {
text-decoration: underline;
}


@ -1,158 +0,0 @@
/**
* jQuery json-viewer
* @author: Alexandre Bodelot <alexandre.bodelot@gmail.com>
* @link: https://github.com/abodelot/jquery.json-viewer
*/
(function($) {
/**
* Check if arg is either an array with at least 1 element, or a dict with at least 1 key
* @return boolean
*/
function isCollapsable(arg) {
return arg instanceof Object && Object.keys(arg).length > 0;
}
/**
* Check if a string represents a valid url
* @return boolean
*/
function isUrl(string) {
var urlRegexp = /^(https?:\/\/|ftps?:\/\/)?([a-z0-9%-]+\.){1,}([a-z0-9-]+)?(:(\d{1,5}))?(\/([a-z0-9\-._~:/?#[\]@!$&'()*+,;=%]+)?)?$/i;
return urlRegexp.test(string);
}
/**
* Transform a json object into html representation
* @return string
*/
function json2html(json, options) {
var html = '';
if (typeof json === 'string') {
// Escape tags and quotes
json = json
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/'/g, '&apos;')
.replace(/"/g, '&quot;');
if (options.withLinks && isUrl(json)) {
html += '<a href="' + json + '" class="json-string" target="_blank">' + json + '</a>';
} else {
// Escape double quotes in the rendered non-URL string.
json = json.replace(/&quot;/g, '\\&quot;');
html += '<span class="json-string">"' + json + '"</span>';
}
} else if (typeof json === 'number') {
html += '<span class="json-literal">' + json + '</span>';
} else if (typeof json === 'boolean') {
html += '<span class="json-literal">' + json + '</span>';
} else if (json === null) {
html += '<span class="json-literal">null</span>';
} else if (json instanceof Array) {
if (json.length > 0) {
html += '[<ol class="json-array">';
for (var i = 0; i < json.length; ++i) {
html += '<li>';
// Add toggle button if item is collapsable
if (isCollapsable(json[i])) {
html += '<a href class="json-toggle"></a>';
}
html += json2html(json[i], options);
// Add comma if item is not last
if (i < json.length - 1) {
html += ',';
}
html += '</li>';
}
html += '</ol>]';
} else {
html += '[]';
}
} else if (typeof json === 'object') {
var keyCount = Object.keys(json).length;
if (keyCount > 0) {
html += '{<ul class="json-dict">';
for (var key in json) {
if (Object.prototype.hasOwnProperty.call(json, key)) {
html += '<li>';
var keyRepr = options.withQuotes ?
'<span class="json-string">"' + key + '"</span>' : key;
// Add toggle button if item is collapsable
if (isCollapsable(json[key])) {
html += '<a href class="json-toggle">' + keyRepr + '</a>';
} else {
html += keyRepr;
}
html += ': ' + json2html(json[key], options);
// Add comma if item is not last
if (--keyCount > 0) {
html += ',';
}
html += '</li>';
}
}
html += '</ul>}';
} else {
html += '{}';
}
}
return html;
}
/**
* jQuery plugin method
* @param json: a javascript object
* @param options: an optional options hash
*/
$.fn.jsonViewer = function(json, options) {
// Merge user options with default options
options = Object.assign({}, {
collapsed: false,
rootCollapsable: true,
withQuotes: false,
withLinks: true
}, options);
// jQuery chaining
return this.each(function() {
// Transform to HTML
var html = json2html(json, options);
if (options.rootCollapsable && isCollapsable(json)) {
html = '<a href class="json-toggle"></a>' + html;
}
// Insert HTML in target DOM element
$(this).html(html);
$(this).addClass('json-document');
// Bind click on toggle buttons
$(this).off('click');
$(this).on('click', 'a.json-toggle', function() {
var target = $(this).toggleClass('collapsed').siblings('ul.json-dict, ol.json-array');
target.toggle();
if (target.is(':visible')) {
target.siblings('.json-placeholder').remove();
} else {
var count = target.children('li').length;
var placeholder = count + (count > 1 ? ' items' : ' item');
target.after('<a href class="json-placeholder">' + placeholder + '</a>');
}
return false;
});
// Simulate click on toggle button when placeholder is clicked
$(this).on('click', 'a.json-placeholder', function() {
$(this).siblings('a.json-toggle').click();
return false;
});
if (options.collapsed == true) {
// Trigger click to collapse all nodes
$(this).find('a.json-toggle').click();
}
});
};
})(jQuery);


@ -30,7 +30,6 @@
var color = Chart.helpers.color;
$(document).ready(function() {
console.log(initial_stats);
start_update();
start_charts();
init_messages();
@ -82,6 +81,7 @@
<div class="item" data-tab="seen-tab">Seen List</div>
<div class="item" data-tab="watch-tab">Watch List</div>
<div class="item" data-tab="plugin-tab">Plugins</div>
<div class="item" data-tab="threads-tab">Threads</div>
<div class="item" data-tab="config-tab">Config</div>
<div class="item" data-tab="log-tab">LogFile</div>
<!-- <div class="item" data-tab="oslo-tab">OSLO CONFIG</div> //-->
@ -97,11 +97,6 @@
<div class="ui segment" style="height: 300px" id="packetsChart"></div>
</div>
</div>
<div class="row">
<div class="column">
<div class="ui segment" style="height: 300px" id="packetTypesChart"></div>
</div>
</div>
<div class="row">
<div class="column">
<div class="ui segment" style="height: 300px" id="messagesChart"></div>
@ -112,8 +107,17 @@
</div>
<div class="row">
<div class="column">
<div class="ui segment" style="height: 300px" id="memChart">
</div>
<div class="ui segment" style="height: 300px" id="packetTypesChart"></div>
</div>
</div>
<div class="row">
<div class="column">
<div class="ui segment" style="height: 300px" id="threadChart"></div>
</div>
</div>
<div class="row">
<div class="column">
<div class="ui segment" style="height: 300px" id="memChart"></div>
</div>
</div>
<!-- <div class="row">
@ -156,6 +160,13 @@
<div id="pluginDiv" class="ui mini text">Loading</div>
</div>
<div class="ui bottom attached tab segment" data-tab="threads-tab">
<h3 class="ui dividing header">
Threads Loaded (<span id="thread_count">{{ thread_count }}</span>)
</h3>
<div id="threadsDiv" class="ui mini text">Loading</div>
</div>
<div class="ui bottom attached tab segment" data-tab="config-tab">
<h3 class="ui dividing header">Config</h3>
<pre id="configjson" class="language-json">{{ config_json|safe }}</pre>
@ -174,7 +185,7 @@
<div class="ui bottom attached tab segment" data-tab="raw-tab">
<h3 class="ui dividing header">Raw JSON</h3>
<pre id="jsonstats" class="language-yaml" style="height:600px;overflow-y:auto;">{{ stats|safe }}</pre>
<pre id="jsonstats" class="language-yaml" style="height:600px;overflow-y:auto;">{{ initial_stats|safe }}</pre>
</div>
<div class="ui text container">


@ -64,9 +64,11 @@ function showError(error) {
function showPosition(position) {
console.log("showPosition Called");
path = $('#pkt_path option:selected').val();
msg = {
'latitude': position.coords.latitude,
'longitude': position.coords.longitude
'longitude': position.coords.longitude,
'path': path,
}
console.log(msg);
$.toast({


@ -19,9 +19,10 @@ function show_aprs_icon(item, symbol) {
function ord(str){return str.charCodeAt(0);}
function update_stats( data ) {
$("#version").text( data["stats"]["aprsd"]["version"] );
console.log(data);
$("#version").text( data["stats"]["APRSDStats"]["version"] );
$("#aprs_connection").html( data["aprs_connection"] );
$("#uptime").text( "uptime: " + data["stats"]["aprsd"]["uptime"] );
$("#uptime").text( "uptime: " + data["stats"]["APRSDStats"]["uptime"] );
short_time = data["time"].split(/\s(.+)/)[1];
}
@ -37,7 +38,7 @@ function start_update() {
update_stats(data);
},
complete: function() {
setTimeout(statsworker, 10000);
setTimeout(statsworker, 60000);
}
});
})();


@ -313,6 +313,7 @@ function create_callsign_tab(callsign, active=false) {
//item_html += '<button onClick="callsign_select(\''+callsign+'\');" callsign="'+callsign+'" class="nav-link '+active_str+'" id="'+tab_id+'" data-bs-toggle="tab" data-bs-target="#'+tab_content+'" type="button" role="tab" aria-controls="'+callsign+'" aria-selected="true">';
item_html += '<button onClick="callsign_select(\''+callsign+'\');" callsign="'+callsign+'" class="nav-link position-relative '+active_str+'" id="'+tab_id+'" data-bs-toggle="tab" data-bs-target="#'+tab_content+'" type="button" role="tab" aria-controls="'+callsign+'" aria-selected="true">';
item_html += callsign+'&nbsp;&nbsp;';
item_html += '<span id="'+tab_notify_id+'" class="position-absolute top-0 start-80 translate-middle badge bg-danger border border-light rounded-pill visually-hidden">0</span>';
item_html += '<span onclick="delete_tab(\''+callsign+'\');">×</span>';
item_html += '</button></li>'
@ -407,13 +408,15 @@ function append_message(callsign, msg, msg_html) {
tab_notify_id = tab_notification_id(callsign, true);
// get the current count of notifications
count = parseInt($(tab_notify_id).text());
if (isNaN(count)) {
count = 0;
}
count += 1;
$(tab_notify_id).text(count);
$(tab_notify_id).removeClass('visually-hidden');
}
// Find the right div to place the html
new_callsign = add_callsign(callsign, msg);
update_callsign_path(callsign, msg);
append_message_html(callsign, msg_html, new_callsign);
@ -502,7 +505,7 @@ function sent_msg(msg) {
msg_html = create_message_html(d, t, msg['from_call'], msg['to_call'], msg['message_text'], ack_id, msg, false);
append_message(msg['to_call'], msg, msg_html);
save_data();
scroll_main_content(msg['from_call']);
scroll_main_content(msg['to_call']);
}
function from_msg(msg) {


@ -1,57 +0,0 @@
/* Root element */
.json-document {
padding: 1em 2em;
}
/* Syntax highlighting for JSON objects */
ul.json-dict, ol.json-array {
list-style-type: none;
margin: 0 0 0 1px;
border-left: 1px dotted #ccc;
padding-left: 2em;
}
.json-string {
color: #0B7500;
}
.json-literal {
color: #1A01CC;
font-weight: bold;
}
/* Toggle button */
a.json-toggle {
position: relative;
color: inherit;
text-decoration: none;
}
a.json-toggle:focus {
outline: none;
}
a.json-toggle:before {
font-size: 1.1em;
color: #c0c0c0;
content: "\25BC"; /* down arrow */
position: absolute;
display: inline-block;
width: 1em;
text-align: center;
line-height: 1em;
left: -1.2em;
}
a.json-toggle:hover:before {
color: #aaa;
}
a.json-toggle.collapsed:before {
/* Use rotated down arrow, prevents right arrow appearing smaller than down arrow in some browsers */
transform: rotate(-90deg);
}
/* Collapsable placeholder links */
a.json-placeholder {
color: #aaa;
padding: 0 1em;
text-decoration: none;
}
a.json-placeholder:hover {
text-decoration: underline;
}


@ -1,158 +0,0 @@
/**
* jQuery json-viewer
* @author: Alexandre Bodelot <alexandre.bodelot@gmail.com>
* @link: https://github.com/abodelot/jquery.json-viewer
*/
(function($) {
/**
* Check if arg is either an array with at least 1 element, or a dict with at least 1 key
* @return boolean
*/
function isCollapsable(arg) {
return arg instanceof Object && Object.keys(arg).length > 0;
}
/**
* Check if a string represents a valid url
* @return boolean
*/
function isUrl(string) {
var urlRegexp = /^(https?:\/\/|ftps?:\/\/)?([a-z0-9%-]+\.){1,}([a-z0-9-]+)?(:(\d{1,5}))?(\/([a-z0-9\-._~:/?#[\]@!$&'()*+,;=%]+)?)?$/i;
return urlRegexp.test(string);
}
/**
* Transform a json object into html representation
* @return string
*/
function json2html(json, options) {
var html = '';
if (typeof json === 'string') {
// Escape tags and quotes
json = json
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/'/g, '&apos;')
.replace(/"/g, '&quot;');
if (options.withLinks && isUrl(json)) {
html += '<a href="' + json + '" class="json-string" target="_blank">' + json + '</a>';
} else {
// Escape double quotes in the rendered non-URL string.
json = json.replace(/&quot;/g, '\\&quot;');
html += '<span class="json-string">"' + json + '"</span>';
}
} else if (typeof json === 'number') {
html += '<span class="json-literal">' + json + '</span>';
} else if (typeof json === 'boolean') {
html += '<span class="json-literal">' + json + '</span>';
} else if (json === null) {
html += '<span class="json-literal">null</span>';
} else if (json instanceof Array) {
if (json.length > 0) {
html += '[<ol class="json-array">';
for (var i = 0; i < json.length; ++i) {
html += '<li>';
// Add toggle button if item is collapsable
if (isCollapsable(json[i])) {
html += '<a href class="json-toggle"></a>';
}
html += json2html(json[i], options);
// Add comma if item is not last
if (i < json.length - 1) {
html += ',';
}
html += '</li>';
}
html += '</ol>]';
} else {
html += '[]';
}
} else if (typeof json === 'object') {
var keyCount = Object.keys(json).length;
if (keyCount > 0) {
html += '{<ul class="json-dict">';
for (var key in json) {
if (Object.prototype.hasOwnProperty.call(json, key)) {
html += '<li>';
var keyRepr = options.withQuotes ?
'<span class="json-string">"' + key + '"</span>' : key;
// Add toggle button if item is collapsable
if (isCollapsable(json[key])) {
html += '<a href class="json-toggle">' + keyRepr + '</a>';
} else {
html += keyRepr;
}
html += ': ' + json2html(json[key], options);
// Add comma if item is not last
if (--keyCount > 0) {
html += ',';
}
html += '</li>';
}
}
html += '</ul>}';
} else {
html += '{}';
}
}
return html;
}
/**
* jQuery plugin method
* @param json: a javascript object
* @param options: an optional options hash
*/
$.fn.jsonViewer = function(json, options) {
// Merge user options with default options
options = Object.assign({}, {
collapsed: false,
rootCollapsable: true,
withQuotes: false,
withLinks: true
}, options);
// jQuery chaining
return this.each(function() {
// Transform to HTML
var html = json2html(json, options);
if (options.rootCollapsable && isCollapsable(json)) {
html = '<a href class="json-toggle"></a>' + html;
}
// Insert HTML in target DOM element
$(this).html(html);
$(this).addClass('json-document');
// Bind click on toggle buttons
$(this).off('click');
$(this).on('click', 'a.json-toggle', function() {
var target = $(this).toggleClass('collapsed').siblings('ul.json-dict, ol.json-array');
target.toggle();
if (target.is(':visible')) {
target.siblings('.json-placeholder').remove();
} else {
var count = target.children('li').length;
var placeholder = count + (count > 1 ? ' items' : ' item');
target.after('<a href class="json-placeholder">' + placeholder + '</a>');
}
return false;
});
// Simulate click on toggle button when placeholder is clicked
$(this).on('click', 'a.json-placeholder', function() {
$(this).siblings('a.json-toggle').click();
return false;
});
if (options.collapsed == true) {
// Trigger click to collapse all nodes
$(this).find('a.json-toggle').click();
}
});
};
})(jQuery);


@ -103,6 +103,7 @@
<option value="WIDE1-1">WIDE1-1</option>
<option value="WIDE1-1,WIDE2-1">WIDE1-1,WIDE2-1</option>
<option value="ARISS">ARISS</option>
<option value="GATE">GATE</option>
</select>
</div>
<div class="col-sm-3">


@ -3,10 +3,11 @@ import importlib.metadata as imp
import io
import json
import logging
import time
import os
import queue
import flask
from flask import Flask
from flask import Flask, request
from flask_httpauth import HTTPBasicAuth
from oslo_config import cfg, generator
import socketio
@ -15,14 +16,22 @@ from werkzeug.security import check_password_hash
import aprsd
from aprsd import cli_helper, client, conf, packets, plugin, threads
from aprsd.log import log
from aprsd.rpc import client as aprsd_rpc_client
from aprsd.threads import stats as stats_threads
from aprsd.utils import json as aprsd_json
CONF = cfg.CONF
LOG = logging.getLogger("gunicorn.access")
logging_queue = queue.Queue()
# ADMIN_COMMAND True means we are running from `aprsd admin`
# the `aprsd admin` command will import this file after setting
# the APRSD_ADMIN_COMMAND environment variable.
ADMIN_COMMAND = os.environ.get("APRSD_ADMIN_COMMAND", False)
auth = HTTPBasicAuth()
users = {}
users: dict[str, str] = {}
app = Flask(
"aprsd",
static_url_path="/static",
@ -45,114 +54,40 @@ def verify_password(username, password):
def _stats():
track = aprsd_rpc_client.RPCClient().get_packet_track()
stats_obj = stats_threads.StatsStore()
stats_obj.load()
now = datetime.datetime.now()
time_format = "%m-%d-%Y %H:%M:%S"
stats_dict = aprsd_rpc_client.RPCClient().get_stats_dict()
if not stats_dict:
stats_dict = {
"aprsd": {},
"aprs-is": {"server": ""},
"messages": {
"sent": 0,
"received": 0,
},
"email": {
"sent": 0,
"received": 0,
},
"seen_list": {
"sent": 0,
"received": 0,
},
}
# Convert the watch_list entries to age
wl = aprsd_rpc_client.RPCClient().get_watch_list()
new_list = {}
if wl:
for call in wl.get_all():
# call_date = datetime.datetime.strptime(
# str(wl.last_seen(call)),
# "%Y-%m-%d %H:%M:%S.%f",
# )
# We have to convert the RingBuffer to a real list
# so that json.dumps works.
# pkts = []
# for pkt in wl.get(call)["packets"].get():
# pkts.append(pkt)
new_list[call] = {
"last": wl.age(call),
# "packets": pkts
}
stats_dict["aprsd"]["watch_list"] = new_list
packet_list = aprsd_rpc_client.RPCClient().get_packet_list()
rx = tx = 0
types = {}
if packet_list:
rx = packet_list.total_rx()
tx = packet_list.total_tx()
types_copy = packet_list.types.copy()
for key in types_copy:
types[str(key)] = dict(types_copy[key])
stats_dict["packets"] = {
"sent": tx,
"received": rx,
"types": types,
}
if track:
size_tracker = len(track)
else:
size_tracker = 0
result = {
stats = {
"time": now.strftime(time_format),
"size_tracker": size_tracker,
"stats": stats_dict,
"stats": stats_obj.data,
}
return result
return stats
@app.route("/stats")
def stats():
LOG.debug("/stats called")
return json.dumps(_stats())
return json.dumps(_stats(), cls=aprsd_json.SimpleJSONEncoder)
@app.route("/")
def index():
stats = _stats()
wl = aprsd_rpc_client.RPCClient().get_watch_list()
if wl and wl.is_enabled():
watch_count = len(wl)
watch_age = wl.max_delta()
else:
watch_count = 0
watch_age = 0
sl = aprsd_rpc_client.RPCClient().get_seen_list()
if sl:
seen_count = len(sl)
else:
seen_count = 0
pm = plugin.PluginManager()
plugins = pm.get_plugins()
plugin_count = len(plugins)
client_stats = stats["stats"].get("APRSClientStats", {})
if CONF.aprs_network.enabled:
transport = "aprs-is"
if client_stats:
aprs_connection = client_stats.get("server_string", "")
else:
aprs_connection = "APRS-IS"
aprs_connection = (
"APRS-IS Server: <a href='http://status.aprs2.net' >"
"{}</a>".format(stats["stats"]["aprs-is"]["server"])
"{}</a>".format(aprs_connection)
)
else:
# We might be connected to a KISS socket?
@ -173,13 +108,20 @@ def index():
)
)
stats["transport"] = transport
stats["aprs_connection"] = aprs_connection
if client_stats:
stats["stats"]["APRSClientStats"]["transport"] = transport
stats["stats"]["APRSClientStats"]["aprs_connection"] = aprs_connection
entries = conf.conf_to_dict()
thread_info = stats["stats"].get("APRSDThreadList", {})
if thread_info:
thread_count = len(thread_info)
else:
thread_count = "unknown"
return flask.render_template(
"index.html",
initial_stats=stats,
initial_stats=json.dumps(stats, cls=aprsd_json.SimpleJSONEncoder),
aprs_connection=aprs_connection,
callsign=CONF.callsign,
version=aprsd.__version__,
@ -187,10 +129,8 @@ def index():
entries, indent=4,
sort_keys=True, default=str,
),
watch_count=watch_count,
watch_age=watch_age,
seen_count=seen_count,
plugin_count=plugin_count,
thread_count=thread_count,
# oslo_out=generate_oslo()
)
@ -209,19 +149,10 @@ def messages():
@auth.login_required
@app.route("/packets")
def get_packets():
LOG.debug("/packets called")
packet_list = aprsd_rpc_client.RPCClient().get_packet_list()
if packet_list:
tmp_list = []
pkts = packet_list.copy()
for key in pkts:
pkt = packet_list.get(key)
if pkt:
tmp_list.append(pkt.json)
return json.dumps(tmp_list)
else:
return json.dumps([])
stats = _stats()
stats_dict = stats["stats"]
packets = stats_dict.get("PacketList", {})
return json.dumps(packets, cls=aprsd_json.SimpleJSONEncoder)
@auth.login_required
@ -273,23 +204,34 @@ def save():
return json.dumps({"messages": "saved"})
@app.route("/log_entries", methods=["POST"])
def log_entries():
"""The url that the server can call to update the logs."""
entries = request.json
LOG.info(f"Log entries called {len(entries)}")
for entry in entries:
logging_queue.put(entry)
return json.dumps({"messages": "saved"})
class LogUpdateThread(threads.APRSDThread):
def __init__(self):
def __init__(self, logging_queue=None):
super().__init__("LogUpdate")
self.logging_queue = logging_queue
def loop(self):
if sio:
log_entries = aprsd_rpc_client.RPCClient().get_log_entries()
if log_entries:
LOG.info(f"Sending log entries! {len(log_entries)}")
for entry in log_entries:
try:
log_entry = self.logging_queue.get(block=True, timeout=1)
if log_entry:
sio.emit(
"log_entry", entry,
"log_entry",
log_entry,
namespace="/logs",
)
time.sleep(5)
except queue.Empty:
pass
return True
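Together, the new /log_entries route and LogUpdateThread replace the old RPC log monitor: the server process POSTs batches of log records, the route queues them, and the thread drains the queue into socket.io, throttled by a short sleep between emits. A hedged sketch of the producer side, assuming each entry is a JSON-serializable dict and that the admin app listens on port 8001 as in the Dockerfile below; the entry fields here are illustrative:

import requests  # assumption: any HTTP client would do

# Hypothetical producer: push a batch of log records to the admin app.
entries = [
    {"levelname": "INFO", "message": "Sent AckPacket to KM6XXX-9"},
]
requests.post(
    "http://localhost:8001/log_entries",
    json=entries,
    timeout=5,
)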
@ -297,17 +239,17 @@ class LoggingNamespace(socketio.Namespace):
log_thread = None
def on_connect(self, sid, environ):
global sio
LOG.debug(f"LOG on_connect {sid}")
global sio, logging_queue
LOG.info(f"LOG on_connect {sid}")
sio.emit(
"connected", {"data": "/logs Connected"},
namespace="/logs",
)
self.log_thread = LogUpdateThread()
self.log_thread = LogUpdateThread(logging_queue=logging_queue)
self.log_thread.start()
def on_disconnect(self, sid):
LOG.debug(f"LOG Disconnected {sid}")
LOG.info(f"LOG Disconnected {sid}")
if self.log_thread:
self.log_thread.stop()
@ -332,8 +274,8 @@ if __name__ == "__main__":
async_mode = "threading"
sio = socketio.Server(logger=True, async_mode=async_mode)
app.wsgi_app = socketio.WSGIApp(sio, app.wsgi_app)
log_level = init_app(log_level="DEBUG")
log.setup_logging(app, log_level)
log_level = init_app()
log.setup_logging(log_level)
sio.register_namespace(LoggingNamespace("/logs"))
CONF.log_opt_values(LOG, logging.DEBUG)
app.run(
@ -352,17 +294,17 @@ if __name__ == "uwsgi_file_aprsd_wsgi":
sio = socketio.Server(logger=True, async_mode=async_mode)
app.wsgi_app = socketio.WSGIApp(sio, app.wsgi_app)
log_level = init_app(
log_level="DEBUG",
# log_level="DEBUG",
config_file="/config/aprsd.conf",
# Commented out for local development.
# config_file=cli_helper.DEFAULT_CONFIG_FILE
)
log.setup_logging(app, log_level)
log.setup_logging(log_level)
sio.register_namespace(LoggingNamespace("/logs"))
CONF.log_opt_values(LOG, logging.DEBUG)
if __name__ == "aprsd.wsgi":
if __name__ == "aprsd.wsgi" and not ADMIN_COMMAND:
# set async_mode to 'threading', 'eventlet', 'gevent' or 'gevent_uwsgi' to
# force a mode else, the best mode is selected automatically from what's
# installed
@ -371,10 +313,10 @@ if __name__ == "aprsd.wsgi":
app.wsgi_app = socketio.WSGIApp(sio, app.wsgi_app)
log_level = init_app(
log_level="DEBUG",
# log_level="DEBUG",
config_file="/config/aprsd.conf",
# config_file=cli_helper.DEFAULT_CONFIG_FILE,
)
log.setup_logging(app, log_level)
log.setup_logging(log_level)
sio.register_namespace(LoggingNamespace("/logs"))
CONF.log_opt_values(LOG, logging.DEBUG)
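With the aprsd.rpc client gone from this module, /stats simply serializes whatever the StatsStore last loaded from disk, and /packets now serves the PacketList section the same way (behind HTTP basic auth). A minimal consumer sketch, assuming the admin app is reachable on localhost:8001 and that the section names match the JavaScript handlers earlier in this diff:

import json
import urllib.request

# Hypothetical consumer: poll the /stats endpoint and read a few of the
# new collector sections.  Host and port are assumptions.
with urllib.request.urlopen("http://localhost:8001/stats", timeout=5) as resp:
    stats = json.loads(resp.read())["stats"]

print(stats.get("APRSDStats", {}).get("version"))
print(len(stats.get("APRSDThreadList", {})), "threads")
print(len(stats.get("SeenList", {})), "callsigns seen")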


@ -1,84 +0,0 @@
#
# This file is autogenerated by pip-compile with Python 3.10
# by the following command:
#
# pip-compile --annotation-style=line dev-requirements.in
#
add-trailing-comma==3.1.0 # via gray
alabaster==0.7.16 # via sphinx
autoflake==1.5.3 # via gray
babel==2.14.0 # via sphinx
black==24.2.0 # via gray
build==1.1.1 # via pip-tools
cachetools==5.3.3 # via tox
certifi==2024.2.2 # via requests
cfgv==3.4.0 # via pre-commit
chardet==5.2.0 # via tox
charset-normalizer==3.3.2 # via requests
click==8.1.7 # via black, fixit, moreorless, pip-tools
colorama==0.4.6 # via tox
commonmark==0.9.1 # via rich
configargparse==1.7 # via gray
coverage[toml]==7.4.3 # via pytest-cov
distlib==0.3.8 # via virtualenv
docutils==0.20.1 # via sphinx
exceptiongroup==1.2.0 # via pytest
filelock==3.13.1 # via tox, virtualenv
fixit==2.1.0 # via gray
flake8==7.0.0 # via -r dev-requirements.in, pep8-naming
gray==0.14.0 # via -r dev-requirements.in
identify==2.5.35 # via pre-commit
idna==3.6 # via requests
imagesize==1.4.1 # via sphinx
iniconfig==2.0.0 # via pytest
isort==5.13.2 # via -r dev-requirements.in, gray
jinja2==3.1.3 # via sphinx
libcst==1.2.0 # via fixit
markupsafe==2.1.5 # via jinja2
mccabe==0.7.0 # via flake8
moreorless==0.4.0 # via fixit
mypy==1.8.0 # via -r dev-requirements.in
mypy-extensions==1.0.0 # via black, mypy, typing-inspect
nodeenv==1.8.0 # via pre-commit
packaging==23.2 # via black, build, fixit, pyproject-api, pytest, sphinx, tox
pathspec==0.12.1 # via black, trailrunner
pep8-naming==0.13.3 # via -r dev-requirements.in
pip-tools==7.4.1 # via -r dev-requirements.in
platformdirs==4.2.0 # via black, tox, virtualenv
pluggy==1.4.0 # via pytest, tox
pre-commit==3.6.2 # via -r dev-requirements.in
pycodestyle==2.11.1 # via flake8
pyflakes==3.2.0 # via autoflake, flake8
pygments==2.17.2 # via rich, sphinx
pyproject-api==1.6.1 # via tox
pyproject-hooks==1.0.0 # via build, pip-tools
pytest==8.0.2 # via -r dev-requirements.in, pytest-cov
pytest-cov==4.1.0 # via -r dev-requirements.in
pyupgrade==3.15.1 # via gray
pyyaml==6.0.1 # via libcst, pre-commit
requests==2.31.0 # via sphinx
rich==12.6.0 # via gray
snowballstemmer==2.2.0 # via sphinx
sphinx==7.2.6 # via -r dev-requirements.in
sphinxcontrib-applehelp==1.0.8 # via sphinx
sphinxcontrib-devhelp==1.0.6 # via sphinx
sphinxcontrib-htmlhelp==2.0.5 # via sphinx
sphinxcontrib-jsmath==1.0.1 # via sphinx
sphinxcontrib-qthelp==1.0.7 # via sphinx
sphinxcontrib-serializinghtml==1.1.10 # via sphinx
tokenize-rt==5.2.0 # via add-trailing-comma, pyupgrade
toml==0.10.2 # via autoflake
tomli==2.0.1 # via black, build, coverage, fixit, mypy, pip-tools, pyproject-api, pyproject-hooks, pytest, tox
tox==4.14.0 # via -r dev-requirements.in
trailrunner==1.4.0 # via fixit
typing-extensions==4.10.0 # via black, libcst, mypy, typing-inspect
typing-inspect==0.9.0 # via libcst
unify==0.5 # via gray
untokenize==0.1.1 # via unify
urllib3==2.2.1 # via requests
virtualenv==20.25.1 # via pre-commit, tox
wheel==0.42.0 # via pip-tools
# The following packages are considered to be unsafe in a requirements file:
# pip
# setuptools


@ -1,10 +1,18 @@
FROM python:3.11-slim as build
FROM python:3.11-slim AS build
ARG VERSION=3.1.0
ARG VERSION=3.4.0
# pass this in as 'github' if you want to install from the github repo vs pypi
ARG INSTALL_TYPE=pypi
ARG BRANCH=master
ARG BUILDX_QEMU_ENV
ENV APRSD_BRANCH=${BRANCH:-master}
ENV TZ=${TZ:-US/Eastern}
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
ENV APRSD_PIP_VERSION=${VERSION}
ENV PATH="${PATH}:/app/.local/bin"
ENV PIP_DEFAULT_TIMEOUT=100 \
# Allow statements and log messages to immediately appear
@ -19,6 +27,7 @@ RUN set -ex \
# Create a non-root user
&& addgroup --system --gid 1001 appgroup \
&& useradd --uid 1001 --gid 1001 -s /usr/bin/bash -m -d /app appuser \
&& usermod -aG sudo appuser \
# Upgrade the package index and install security upgrades
&& apt-get update \
&& apt-get upgrade -y \
@ -31,29 +40,38 @@ RUN set -ex \
### Final stage
FROM build as final
FROM build AS install
WORKDIR /app
RUN pip3 install aprsd==$APRSD_PIP_VERSION
RUN pip install gevent uwsgi
RUN which aprsd
RUN pip3 install -U pip
RUN mkdir /config
RUN chown -R appuser:appgroup /app
RUN chown -R appuser:appgroup /config
USER appuser
RUN echo "PATH=\$PATH:/usr/games" >> /app/.bashrc
RUN if [ "$INSTALL_TYPE" = "pypi" ]; then \
pip3 install aprsd==$APRSD_PIP_VERSION; \
elif [ "$INSTALL_TYPE" = "github" ]; then \
git clone -b $APRSD_BRANCH https://github.com/craigerl/aprsd; \
cd /app/aprsd && pip install .; \
ls -al /app/.local/lib/python3.11/site-packages/aprsd*; \
fi
RUN pip install gevent uwsgi
RUN echo "PATH=\$PATH:/usr/games:/app/.local/bin" >> /app/.bashrc
RUN which aprsd
RUN aprsd sample-config > /config/aprsd.conf
RUN aprsd --version
ADD bin/run.sh /app
ADD bin/listen.sh /app
ADD bin/setup.sh /app
ADD bin/admin.sh /app
FROM install AS final
# For the web admin interface
EXPOSE 8001
ENTRYPOINT ["/app/run.sh"]
VOLUME ["/config"]
ENTRYPOINT ["/app/setup.sh"]
CMD ["server"]
# Set the user to run the application
USER appuser


@ -1,58 +0,0 @@
FROM python:3.11-slim as build
ARG BRANCH=master
ARG BUILDX_QEMU_ENV
ENV APRSD_BRANCH=${BRANCH:-master}
ENV PIP_DEFAULT_TIMEOUT=100 \
# Allow statements and log messages to immediately appear
PYTHONUNBUFFERED=1 \
# disable a pip version check to reduce run-time & log-spam
PIP_DISABLE_PIP_VERSION_CHECK=1 \
# cache is useless in docker image, so disable to reduce image size
PIP_NO_CACHE_DIR=1
RUN set -ex \
# Create a non-root user
&& addgroup --system --gid 1001 appgroup \
&& useradd --uid 1001 --gid 1001 -s /usr/bin/bash -m -d /app appuser \
# Upgrade the package index and install security upgrades
&& apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git build-essential curl libffi-dev fortune \
python3-dev libssl-dev libxml2-dev libxslt-dev telnet sudo \
# Install dependencies
# Clean up
&& apt-get autoremove -y \
&& apt-get clean -y
### Final stage
FROM build as final
WORKDIR /app
RUN git clone -b $APRSD_BRANCH https://github.com/craigerl/aprsd
RUN cd aprsd && pip install --no-cache-dir .
RUN pip install gevent uwsgi
RUN which aprsd
RUN mkdir /config
RUN chown -R appuser:appgroup /app
RUN chown -R appuser:appgroup /config
USER appuser
RUN echo "PATH=\$PATH:/usr/games" >> /app/.bashrc
RUN which aprsd
RUN aprsd sample-config > /config/aprsd.conf
ADD bin/run.sh /app
ADD bin/listen.sh /app
ADD bin/admin.sh /app
EXPOSE 8000
# CMD ["gunicorn", "aprsd.wsgi:app", "--host", "0.0.0.0", "--port", "8000"]
ENTRYPOINT ["/app/run.sh"]
VOLUME ["/config"]
# Set the user to run the application
USER appuser

docker/bin/setup.sh Executable file

@ -0,0 +1,50 @@
#!/usr/bin/env bash
set -x
# The default command
# Override the command in docker-compose.yml to change
# what command you want to run in the container
COMMAND="server"
if [ ! -z "${@+x}" ]; then
COMMAND=$@
fi
if [ ! -z "${APRSD_PLUGINS}" ]; then
OLDIFS=$IFS
IFS=','
echo "Installing pypi plugins '$APRSD_PLUGINS'";
for plugin in ${APRSD_PLUGINS}; do
IFS=$OLDIFS
# call your procedure/other scripts here below
echo "Installing '$plugin'"
pip3 install --user $plugin
done
fi
if [ ! -z "${APRSD_EXTENSIONS}" ]; then
OLDIFS=$IFS
IFS=','
echo "Installing APRSD extensions from pypi '$APRSD_EXTENSIONS'";
for extension in ${APRSD_EXTENSIONS}; do
IFS=$OLDIFS
# call your procedure/other scripts here below
echo "Installing '$extension'"
pip3 install --user $extension
done
fi
if [ -z "${LOG_LEVEL}" ] || [[ ! "${LOG_LEVEL}" =~ ^(CRITICAL|ERROR|WARNING|INFO)$ ]]; then
LOG_LEVEL="DEBUG"
fi
echo "Log level is set to ${LOG_LEVEL}";
# check to see if there is a config file
APRSD_CONFIG="/config/aprsd.conf"
if [ ! -e "$APRSD_CONFIG" ]; then
echo "'$APRSD_CONFIG' File does not exist. Creating."
aprsd sample-config > $APRSD_CONFIG
fi
aprsd ${COMMAND} --config ${APRSD_CONFIG} --loglevel ${LOG_LEVEL}


@ -26,7 +26,7 @@ DEV=0
REBUILD_BUILDX=0
TAG="latest"
BRANCH=${BRANCH:-master}
VERSION="3.0.0"
VERSION="3.3.4"
while getopts "hdart:b:v:" OPTION
do
@ -90,7 +90,8 @@ then
# Use this script to locally build the docker image
docker buildx build --push --platform $PLATFORMS \
-t hemna6969/aprsd:$TAG \
-f Dockerfile-dev --build-arg branch=$BRANCH \
--build-arg INSTALL_TYPE=github \
--build-arg branch=$BRANCH \
--build-arg BUILDX_QEMU_ENV=true \
--no-cache .
else
@ -101,6 +102,5 @@ else
--build-arg BUILDX_QEMU_ENV=true \
-t hemna6969/aprsd:$VERSION \
-t hemna6969/aprsd:$TAG \
-t hemna6969/aprsd:latest \
-f Dockerfile .
-t hemna6969/aprsd:latest .
fi


@ -0,0 +1,37 @@
aprsd.client.drivers package
============================
Submodules
----------
aprsd.client.drivers.aprsis module
----------------------------------
.. automodule:: aprsd.client.drivers.aprsis
:members:
:undoc-members:
:show-inheritance:
aprsd.client.drivers.fake module
--------------------------------
.. automodule:: aprsd.client.drivers.fake
:members:
:undoc-members:
:show-inheritance:
aprsd.client.drivers.kiss module
--------------------------------
.. automodule:: aprsd.client.drivers.kiss
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: aprsd.client.drivers
:members:
:undoc-members:
:show-inheritance:


@ -0,0 +1,69 @@
aprsd.client package
====================
Subpackages
-----------
.. toctree::
:maxdepth: 4
aprsd.client.drivers
Submodules
----------
aprsd.client.aprsis module
--------------------------
.. automodule:: aprsd.client.aprsis
:members:
:undoc-members:
:show-inheritance:
aprsd.client.base module
------------------------
.. automodule:: aprsd.client.base
:members:
:undoc-members:
:show-inheritance:
aprsd.client.factory module
---------------------------
.. automodule:: aprsd.client.factory
:members:
:undoc-members:
:show-inheritance:
aprsd.client.fake module
------------------------
.. automodule:: aprsd.client.fake
:members:
:undoc-members:
:show-inheritance:
aprsd.client.kiss module
------------------------
.. automodule:: aprsd.client.kiss
:members:
:undoc-members:
:show-inheritance:
aprsd.client.stats module
-------------------------
.. automodule:: aprsd.client.stats
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: aprsd.client
:members:
:undoc-members:
:show-inheritance:


@ -1,29 +0,0 @@
aprsd.clients package
=====================
Submodules
----------
aprsd.clients.aprsis module
---------------------------
.. automodule:: aprsd.clients.aprsis
:members:
:undoc-members:
:show-inheritance:
aprsd.clients.kiss module
-------------------------
.. automodule:: aprsd.clients.kiss
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: aprsd.clients
:members:
:undoc-members:
:show-inheritance:

docs/apidoc/aprsd.log.rst Normal file

@ -0,0 +1,21 @@
aprsd.log package
=================
Submodules
----------
aprsd.log.log module
--------------------
.. automodule:: aprsd.log.log
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: aprsd.log
:members:
:undoc-members:
:show-inheritance:


@ -4,6 +4,14 @@ aprsd.packets package
Submodules
----------
aprsd.packets.collector module
------------------------------
.. automodule:: aprsd.packets.collector
:members:
:undoc-members:
:show-inheritance:
aprsd.packets.core module
-------------------------
@ -12,6 +20,14 @@ aprsd.packets.core module
:undoc-members:
:show-inheritance:
aprsd.packets.log module
------------------------
.. automodule:: aprsd.packets.log
:members:
:undoc-members:
:show-inheritance:
aprsd.packets.packet\_list module
---------------------------------


@ -44,14 +44,6 @@ aprsd.plugins.ping module
:undoc-members:
:show-inheritance:
aprsd.plugins.query module
--------------------------
.. automodule:: aprsd.plugins.query
:members:
:undoc-members:
:show-inheritance:
aprsd.plugins.time module
-------------------------


@ -1,29 +0,0 @@
aprsd.rpc package
=================
Submodules
----------
aprsd.rpc.client module
-----------------------
.. automodule:: aprsd.rpc.client
:members:
:undoc-members:
:show-inheritance:
aprsd.rpc.server module
-----------------------
.. automodule:: aprsd.rpc.server
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: aprsd.rpc
:members:
:undoc-members:
:show-inheritance:


@ -7,13 +7,13 @@ Subpackages
.. toctree::
:maxdepth: 4
aprsd.clients
aprsd.client
aprsd.cmds
aprsd.conf
aprsd.log
aprsd.packets
aprsd.plugins
aprsd.rpc
aprsd.stats
aprsd.threads
aprsd.utils
aprsd.web
@ -29,14 +29,6 @@ aprsd.cli\_helper module
:undoc-members:
:show-inheritance:
aprsd.client module
-------------------
.. automodule:: aprsd.client
:members:
:undoc-members:
:show-inheritance:
aprsd.exception module
----------------------
@ -77,14 +69,6 @@ aprsd.plugin\_utils module
:undoc-members:
:show-inheritance:
aprsd.stats module
------------------
.. automodule:: aprsd.stats
:members:
:undoc-members:
:show-inheritance:
aprsd.wsgi module
-----------------


@ -0,0 +1,29 @@
aprsd.stats package
===================
Submodules
----------
aprsd.stats.app module
----------------------
.. automodule:: aprsd.stats.app
:members:
:undoc-members:
:show-inheritance:
aprsd.stats.collector module
----------------------------
.. automodule:: aprsd.stats.collector
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: aprsd.stats
:members:
:undoc-members:
:show-inheritance:


@ -28,6 +28,14 @@ aprsd.threads.log\_monitor module
:undoc-members:
:show-inheritance:
aprsd.threads.registry module
-----------------------------
.. automodule:: aprsd.threads.registry
:members:
:undoc-members:
:show-inheritance:
aprsd.threads.rx module
-----------------------
@ -36,6 +44,14 @@ aprsd.threads.rx module
:undoc-members:
:show-inheritance:
aprsd.threads.stats module
--------------------------
.. automodule:: aprsd.threads.stats
:members:
:undoc-members:
:show-inheritance:
aprsd.threads.tx module
-----------------------

Some files were not shown because too many files have changed in this diff.