stoatwblr
auxbatmon oddities

Anyone know what this indicates? 

 

2023-01-21 15:29:41.686 GMT E (5222566) ovms-duktape: [lib/auxbatmon/auxbatmon.js:55] Error: alloc failed
    at [anon] (duk_hobject_props.c:1063) internal
    at push () native strict preventsyield
    at [anon] (lib/auxbatmon/auxbatmon.js:55) preventsyield
    at forEach () native strict preventsyield
    at ticker (lib/auxbatmon/auxbatmon.js:58)
    at callSubscriberWithImmediateExceptions (int/PubSub.js:1) strict
    at deliverMessage (int/PubSub.js:1) strict
    at c (int/PubSub.js:1) strict
    at publish (int/PubSub.js:1) strict
    at [anon] (int/PubSub.js:1) strict preventsyield


2023-01-21 15:30:41.686 GMT E (5282566) ovms-duktape: [lib/auxbatmon/auxbatmon.js:55] Error: alloc failed
    at [anon] (duk_hobject_props.c:1063) internal
    at push () native strict preventsyield
    at [anon] (lib/auxbatmon/auxbatmon.js:55) preventsyield
    at forEach () native strict preventsyield
    at ticker (lib/auxbatmon/auxbatmon.js:58)
    at callSubscriberWithImmediateExceptions (int/PubSub.js:1) strict
    at deliverMessage (int/PubSub.js:1) strict
    at c (int/PubSub.js:1) strict
    at publish (int/PubSub.js:1) strict
    at [anon] (int/PubSub.js:1) strict preventsyield

dexter
auxbatmon oddities

"alloc failed" normally means the scripting framework ran out of memory.

If you configured a lower heap size as described at https://docs.openvehicles.com/en/latest/userguide/scripting.html#heap-memory, you could try raising it. Use the command "script meminfo" to check whether that's the issue.
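
For reference, the sequence is roughly this (the exact config parameter name is on that docs page; the one shown below is from memory, so verify it there before use):

OVMS# config set module duktape.heapsize 1024
OVMS# script reload
OVMS# script meminfo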

 

stoatwblr
I had a suspicion that might

I had a suspicion that might be the case. History was bumped to 168 hours (1 week) in order to see discharge profiles better, but the heap wasn't touched.

 

 

stoatwblr
This is just after reboot and

This is just after a reboot and an attempt to load the 12V history web page (which failed):

 

OVMS# script meminfo
{
  "totalBytes": 524224,
  "usedBytes": 343520,
  "freeBytes": 180704,
  "largestFreeBytes": 99232,
  "memlib": "umm",
  "ummTotalEntries": 2232,
  "ummUsedEntries": 2191,
  "ummFreeEntries": 41,
  "ummTotalBlocks": 16382,
  "ummUsedBlocks": 10735,
  "ummFreeBlocks": 5647,
  "ummMaxFreeContiguousBlocks": 3101,
  "ummUsageMetric": 190,
  "ummFragmentationMetric": 42
}

 

Interestingly, the history page loaded a few minutes later, so the memory must be pretty close to the ragged edge of oblivion.

Here's what it looks like with the 12V history page displaying:

OVMS# script meminfo
{
  "totalBytes": 524224,
  "usedBytes": 353536,
  "freeBytes": 170688,
  "largestFreeBytes": 94272,
  "memlib": "umm",
  "ummTotalEntries": 2562,
  "ummUsedEntries": 2553,
  "ummFreeEntries": 9,
  "ummTotalBlocks": 16382,
  "ummUsedBlocks": 11048,
  "ummFreeBlocks": 5334,
  "ummMaxFreeContiguousBlocks": 2946,
  "ummUsageMetric": 207,
  "ummFragmentationMetric": 34
}

 

stoatwblr
hmmm. After bumping to 768kB

Hmmm. After bumping the heap to 768 kB and issuing a reload, I see this:

 

OVMS# script meminfo
{
  "totalBytes": 786368,
  "usedBytes": 180256,
  "freeBytes": 606112,
  "largestFreeBytes": 593344,
  "memlib": "umm",
  "ummTotalEntries": 2325,
  "ummUsedEntries": 2266,
  "ummFreeEntries": 59,
  "ummTotalBlocks": 24574,
  "ummUsedBlocks": 5633,
  "ummFreeBlocks": 18941,
  "ummMaxFreeContiguousBlocks": 18542,
  "ummUsageMetric": 29,
  "ummFragmentationMetric": 3
}
W (568595) websocket: WebSocketHandler[0x3f8bcc6c]: job queue overflow detected
W (568605) websocket: WebSocketHandler[0x3f8bcc6c]: job queue overflow detected
W (568615) websocket: WebSocketHandler[0x3f8bcc6c]: job queue overflow detected
W (568635) websocket: WebSocketHandler[0x3f8bcc6c]: job queue overflow detected
W (568655) websocket: WebSocketHandler[0x3f8bcc6c]: job queue overflow detected
W (568695) websocket: WebSocketHandler[0x3f8bcc6c]: job queue overflow detected
W (568705) websocket: WebSocketHandler[0x3f8bcc6c]: job queue overflow detected
W (568705) websocket: WebSocketHandler[0x3f8bcc6c]: job queue overflow detected

The job queue messages are utterly spamming everything else.

The module rebooted itself about 2 minutes after this reload.

stoatwblr
All seems stable after the

All seems stable after the reload, but the history file has been erased (no big deal; the voltage traces showed it was clearly in error and only being recorded intermittently).

For some reason the Trio configuration is not picking up or recording DCDC 12V charger current (which will make checking things more difficult when I refit the module to the car), and there's no direct measurement of actual 12V battery net input/output available without fitting more instrumentation to the car. A 12V CAN-connected coulomb counter would be ideal, of course, but these run to about €300 apiece and wouldn't work when keyed off unless there's some way of storing and flushing the data when power returns.

 

 

stoatwblr
another day another fw

another day another fw upgrade

With 6.5 days' worth of data in the log, 768 kB turns out not to be enough, and the dreaded alloc error has returned after rebooting.

What's interesting is the amount of free RAM compared to the last time around:

 

OVMS# script meminfo
{
  "totalBytes": 786368,
  "usedBytes": 451328,
  "freeBytes": 335040,
  "largestFreeBytes": 54752,
  "memlib": "umm",
  "ummTotalEntries": 2217,
  "ummUsedEntries": 2170,
  "ummFreeEntries": 47,
  "ummTotalBlocks": 24574,
  "ummUsedBlocks": 14104,
  "ummFreeBlocks": 10470,
  "ummMaxFreeContiguousBlocks": 1711,
  "ummUsageMetric": 134,
  "ummFragmentationMetric": 65
}
E (542523) ovms-duktape: [lib/auxbatmon/auxbatmon.js:55] Error: alloc failed
……:     at [anon] (duk_hobject_props.c:1063) internal
……:     at push () native strict preventsyield
……:     at [anon] (lib/auxbatmon/auxbatmon.js:55) preventsyield
……:     at forEach () native strict preventsyield
……:     at ticker (lib/auxbatmon/auxbatmon.js:58)
……:     at callSubscriberWithImmediateExceptions (int/PubSub.js:1) strict
……:     at deliverMessage (int/PubSub.js:1) strict
……:     at c (int/PubSub.js:1) strict
……:     at publish (int/PubSub.js:1) strict
……:     at [anon] (int/PubSub.js:1) strict preventsyield

 

As with last time, increasing the memory and issuing a script reset resulted in a mountain of errors, and the module needed rebooting.

 

E (814333) uart_events: uart event type: 4
E (814333) uart_events: uart event type: 4
W (814483) websocket: WebSocketHandler[0x3f8f9f58]: job queue overflow detected
W (814493) websocket: WebSocketHandler[0x3f838398]: job queue overflow resolved, 78 drops

(hundreds of lines removed)

 

After the reboot, the datafile was immediately readable; no data loss this time.

OVMS# script meminfo
{
  "totalBytes": 1048480,
  "usedBytes": 450080,
  "freeBytes": 598400,
  "largestFreeBytes": 437184,
  "memlib": "umm",
  "ummTotalEntries": 2292,
  "ummUsedEntries": 2215,
  "ummFreeEntries": 77,
  "ummTotalBlocks": 32765,
  "ummUsedBlocks": 14065,
  "ummFreeBlocks": 18700,
  "ummMaxFreeContiguousBlocks": 13662,
  "ummUsageMetric": 75,
  "ummFragmentationMetric": 25
}

stoatwblr
Hmm. looking closer at the

Hmm. Looking closer at the logfile, I can see it DID stop recording data for an unknown period; judging from external observation of the test rig voltmeter, I think it's lost about 18 hours.

I haven't looked closely at the code yet.

Michael, does the logfile actually include timestamps or just data points?

 

dexter
Just data points to save RAM.

Just data points to save RAM. As data points are taken at a defined interval, it only needs to know the time of the last point, which is history["time"].
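
Illustratively, reconstructing a sample's timestamp on the display side works like this (a sketch only; the array name, structure, and 60-second interval here are assumptions, not the actual auxbatmon code):

// history.data: samples in order, oldest first
// history.time: epoch seconds of the newest sample
// interval:     sampling interval in seconds (e.g. 60)
function sampleTime(history, index, interval) {
  var newest = history.data.length - 1;
  return history.time - (newest - index) * interval;
}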

Be aware the memory usage info is dynamic due to garbage collection. GC is done automatically once per minute *after* processing the events, and can also be done manually by "script compact".
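
For example, to get a post-GC reading rather than a snapshot that may still include garbage:

OVMS# script compact
OVMS# script meminfo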

168 hours x 60 minutes x 5 metrics = 50,400 number values. At 8 bytes per value (JS number), that's 403,200 bytes for the core value storage. For temporary buffers and array management, I'd assume about double that size is needed in RAM, so I recommend setting the heap to 1 MB, reducing the sample frequency, or removing some of the tracked metrics.

 

stoatwblr
> As data points are taken at

> As data points are taken at a defined interval, it only needs to know the time of the last point, which is history["time"]

Which is fine until it stops collecting for any reason (such as 12V low-power sleep); then all the previous timestamps are incorrect, as they're merely calculated backwards from "now" and a gap is in effect silently "bridged".

If the values are 8 bytes, then is it possible to encode multiple values into them to reduce memory consumption?

stoatwblr
it looks like 1Mb might not

It looks like 1 MB might not be enough.

This popped up whilst debugging and testing after rewiring the GPS antenna power:

 

E (184318) ovms-duktape: [lib/auxbatmon/auxbatmon.js:62] Error: alloc failed
……:     at [anon] (duk_hbuffer_ops.c:64) internal
……:     at encode () native strict preventsyield
……:     at ticker (lib/auxbatmon/auxbatmon.js:62)
……:     at callSubscriberWithImmediateExceptions (int/PubSub.js:1) strict
……:     at deliverMessage (int/PubSub.js:1) strict
……:     at c (int/PubSub.js:1) strict
……:     at publish (int/PubSub.js:1) strict
……:     at [anon] (int/PubSub.js:1) strict preventsyield
W (184318) ovms-duktape: Duktape: event handling for 'ticker.60' took 2340 ms

dexter
You can raise the memory or

You can raise the memory or rework the data storage. To reduce the memory footprint, I suggest evaluating typed arrays first.

These provide 32-bit floats. You may also be able to map readings to small integer elements by applying a fixed precision. For example, the 12V reading multiplied by 10 will fit into a byte.
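
A minimal sketch of that idea (not the auxbatmon implementation; the names, the 0.1 V precision and the one-sample-per-minute sizing are only illustrations):

// 168 hours of 1-minute samples = 10080 slots per metric
var SLOTS = 168 * 60;

// 12V voltage at 0.1 V precision fits in one byte (0.0 .. 25.5 V)
var volt12 = new Uint8Array(SLOTS);

// metrics needing more range or precision can use 4-byte floats
var current12 = new Float32Array(SLOTS);

var writeIndex = 0;
function record(voltage, current) {
  volt12[writeIndex] = Math.round(voltage * 10);  // e.g. 12.8 V -> 128
  current12[writeIndex] = current;
  writeIndex = (writeIndex + 1) % SLOTS;          // ring buffer: overwrite oldest
}

// decode when rendering: volt12[i] / 10

That drops the per-sample cost from 8 bytes to 1 or 4 bytes per metric.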

Regards,
Michael

stoatwblr
I was thinking of something

I was thinking of something similar. There's definitely room for improvement; an 8-byte float is both overkill for the purpose and a big memory hit.

 
