Commit 1159f926 authored by Bishop Bors's avatar Bishop Bors

Merge pull request #346 from basho/mra/overload_killer

Incorporate an optional 'sink killer' for load shedding purposes

Reviewed-by: bsparrow435
Showing with 374 additions and 60 deletions
@@ -25,6 +25,8 @@ Features
* Syslog style log level comparison flags
* Colored terminal output (requires R16+)
* Map support (requires 17+)
* Optional load shedding by setting a high water mark to kill (and reinstall)
a sink after a configurable cool down timer
Usage
-----
@@ -189,14 +191,19 @@ for the backend:
]}.
```
-Included is `lager_default_formatter`. This provides a generic, default formatting for log messages using a structure similar to Erlang's [iolist](http://learnyousomeerlang.com/buckets-of-sockets#io-lists) which we call "semi-iolist":
+Included is `lager_default_formatter`. This provides a generic, default
+formatting for log messages using a structure similar to Erlang's
+[iolist](http://learnyousomeerlang.com/buckets-of-sockets#io-lists) which we
+call "semi-iolist":
* Any traditional iolist elements in the configuration are printed verbatim.
-* Atoms in the configuration are treated as placeholders for lager metadata and extracted from the log message.
+* Atoms in the configuration are treated as placeholders for lager metadata and
+  extracted from the log message.
* The placeholders `date`, `time`, `message`, `sev` and `severity` will always exist.
-  * `sev` is an abbreviated severity which is interpreted as a capitalized single letter encoding of the severity level
-    (e.g. `'debug'` -> `$D`)
-* The placeholders `pid`, `file`, `line`, `module`, `function`, and `node` will always exist if the parse transform is used.
+  * `sev` is an abbreviated severity which is interpreted as a capitalized
+    single letter encoding of the severity level (e.g. `'debug'` -> `$D`)
+* The placeholders `pid`, `file`, `line`, `module`, `function`, and `node`
+  will always exist if the parse transform is used.
* Applications can define their own metadata placeholder.
* A tuple of `{atom(), semi-iolist()}` allows for a fallback for
the atom placeholder. If the value represented by the atom
@@ -235,6 +242,7 @@ specified via the `crash_log_msg_size` application variable.
Messages from `error_logger` will be redirected to the `error_logger_lager_event`
sink if it is defined, so they can be redirected to another log file.
For example:
```
@@ -249,15 +257,18 @@ For example:
}]
}].
```
-Will send all `error_logger` messages to `error_logger.log` file.
+will send all `error_logger` messages to the `error_logger.log` file.
Overload Protection
-------------------
### Asynchronous mode
Prior to lager 2.0, the `gen_event` at the core of lager operated purely in
synchronous mode. Asynchronous mode is faster, but has no protection against
-message queue overload. In lager 2.0, the `gen_event` takes a hybrid approach. it
-polls its own mailbox size and toggles the messaging between synchronous and
-asynchronous depending on mailbox size.
+message queue overload. As of lager 2.0, the `gen_event` takes a hybrid
+approach. It polls its own mailbox size and toggles the messaging between
+synchronous and asynchronous depending on mailbox size.
```erlang
{async_threshold, 20},
@@ -283,6 +294,31 @@ related processes crash, you can set a limit:
It is probably best to keep this number small.
### Sink Killer
In some high volume situations, it may be preferable to drop all pending log
messages instead of letting them drain over time.
If you prefer, you can use the sink killer to shed load. In this
operational mode, if the `gen_event` mailbox exceeds a configurable
high water mark, the sink is killed and reinstalled after a
configurable cool down time.
You can configure this behavior by using these configuration directives:
```erlang
{killer_hwm, 1000},
{killer_reinstall_after, 5000}
```
With the settings above, if the sink's mailbox size exceeds 1000 messages,
the entire sink is killed and reloaded after 5000 milliseconds. This
behavior can also be installed into alternative sinks if desired.

By default, the manager killer *is not installed* into any sink. If the
`killer_reinstall_after` cool down time is not specified, it defaults to
5000 milliseconds.
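For example, a minimal `sys.config` sketch that enables the killer on the
default sink might look like this (the handler list and file name are
illustrative, not required):

```erlang
[{lager, [
    %% any backend configuration works here
    {handlers, [
        {lager_file_backend, [{file, "log/console.log"}, {level, info}]}
    ]},
    %% kill the sink once more than 1000 messages are pending
    {killer_hwm, 1000},
    %% and reinstall it 5000 milliseconds later
    {killer_reinstall_after, 5000}
]}].
```

The same two keys can also be placed inside an `extra_sinks` sink
definition to protect an alternative sink, as
`test/lager_manager_killer_test.erl` in this change does. For manual load
shedding, `lager_manager_killer` also exports `kill_me/0`, which asks the
handler on the default sink to kill its manager immediately.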
"Unsafe"
--------
The unsafe code pathway bypasses the normal lager formatting code and uses the
......
@@ -52,3 +52,5 @@
{cover_enabled, true}.
{edoc_opts, [{stylesheet_file, "./priv/edoc.css"}]}.
{eunit_opts, [verbose]}.
@@ -29,7 +29,8 @@
start/2,
start_handler/3,
configure_sink/2,
-         stop/1]).
+         stop/1,
+         boot/1]).
%% The `application:get_env/3` compatibility wrapper is useful
%% for other modules
@@ -57,21 +58,19 @@ start_throttle(Sink, Threshold, Window) ->
[Sink, ?THROTTLE, [Threshold, Window]]),
ok.
-determine_async_behavior(_Sink, {ok, undefined}, _Window) ->
-    ok;
 determine_async_behavior(_Sink, undefined, _Window) ->
     ok;
-determine_async_behavior(_Sink, {ok, Threshold}, _Window) when not is_integer(Threshold) orelse Threshold < 0 ->
+determine_async_behavior(_Sink, Threshold, _Window) when not is_integer(Threshold) orelse Threshold < 0 ->
     error_logger:error_msg("Invalid value for 'async_threshold': ~p~n",
                            [Threshold]),
     throw({error, bad_config});
-determine_async_behavior(Sink, {ok, Threshold}, undefined) ->
+determine_async_behavior(Sink, Threshold, undefined) ->
     start_throttle(Sink, Threshold, erlang:trunc(Threshold * 0.2));
-determine_async_behavior(_Sink, {ok, Threshold}, {ok, Window}) when not is_integer(Window) orelse Window > Threshold orelse Window < 0 ->
+determine_async_behavior(_Sink, Threshold, Window) when not is_integer(Window) orelse Window > Threshold orelse Window < 0 ->
     error_logger:error_msg(
       "Invalid value for 'async_threshold_window': ~p~n", [Window]),
     throw({error, bad_config});
-determine_async_behavior(Sink, {ok, Threshold}, {ok, Window}) ->
+determine_async_behavior(Sink, Threshold, Window) ->
     start_throttle(Sink, Threshold, Window).
start_handlers(_Sink, undefined) ->
@@ -123,14 +122,22 @@ clean_up_config_checks() ->
interpret_hwm(undefined) ->
undefined;
-interpret_hwm({ok, undefined}) ->
-    undefined;
-interpret_hwm({ok, HWM}) when not is_integer(HWM) orelse HWM < 0 ->
+interpret_hwm(HWM) when not is_integer(HWM) orelse HWM < 0 ->
     _ = lager:log(warning, self(), "Invalid error_logger high water mark: ~p, disabling", [HWM]),
     undefined;
-interpret_hwm({ok, HWM}) ->
+interpret_hwm(HWM) ->
    HWM.
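%% Install the optional lager_manager_killer handler on a sink: an unset
%% `killer_hwm' disables the feature, a missing `killer_reinstall_after'
%% falls back to 5000 ms, and any other non-integer or negative setting
%% is rejected as bad config.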
maybe_install_sink_killer(_Sink, undefined, _ReinstallTimer) -> ok;
maybe_install_sink_killer(Sink, HWM, undefined) -> maybe_install_sink_killer(Sink, HWM, 5000);
maybe_install_sink_killer(Sink, HWM, ReinstallTimer) when is_integer(HWM) andalso is_integer(ReinstallTimer)
andalso HWM >= 0 andalso ReinstallTimer >= 0 ->
_ = supervisor:start_child(lager_handler_watcher_sup, [Sink, lager_manager_killer,
[HWM, ReinstallTimer]]);
maybe_install_sink_killer(_Sink, HWM, ReinstallTimer) ->
error_logger:error_msg("Invalid value for 'killer_hwm': ~p or 'killer_reinstall_after': ~p", [HWM, ReinstallTimer]),
throw({error, bad_config}).
start_error_logger_handler({ok, false}, _HWM, _Whitelist) ->
[];
start_error_logger_handler(_, HWM, undefined) ->
@@ -139,7 +146,7 @@ start_error_logger_handler(_, HWM, {ok, WhiteList}) ->
GlStrategy = case application:get_env(lager, error_logger_groupleader_strategy) of
undefined ->
handle;
{ok, GlStrategy0} when
GlStrategy0 =:= handle;
GlStrategy0 =:= ignore;
GlStrategy0 =:= mirror ->
@@ -151,23 +158,24 @@ start_error_logger_handler(_, HWM, {ok, WhiteList}) ->
throw({error, bad_config})
end,
-    case supervisor:start_child(lager_handler_watcher_sup, [error_logger, error_logger_lager_h, [HWM, GlStrategy]]) of
+    _ = case supervisor:start_child(lager_handler_watcher_sup, [error_logger, error_logger_lager_h, [HWM, GlStrategy]]) of
{ok, _} ->
[begin error_logger:delete_report_handler(X), X end ||
X <- gen_event:which_handlers(error_logger) -- [error_logger_lager_h | WhiteList]];
{error, _} ->
[]
-    end.
+    end,
-%% `determine_async_behavior/3' is called with the results from either
-%% `application:get_env/2' and `proplists:get_value/2'. Since
-%% `application:get_env/2' wraps a successful retrieval in an `{ok,
-%% Value}' tuple, do the same for the result from
-%% `proplists:get_value/2'.
-wrap_proplist_value(undefined) ->
-    undefined;
-wrap_proplist_value(Value) ->
-    {ok, Value}.
+    Handlers = case application:get_env(lager, handlers) of
+        undefined ->
+            [{lager_console_backend, info},
+             {lager_file_backend, [{file, "log/error.log"}, {level, error}, {size, 10485760}, {date, "$D0"}, {count, 5}]},
+             {lager_file_backend, [{file, "log/console.log"}, {level, info}, {size, 10485760}, {date, "$D0"}, {count, 5}]}];
+        {ok, Val} ->
+            Val
+    end,
+    Handlers.
configure_sink(Sink, SinkDef) ->
lager_config:new_sink(Sink),
@@ -177,12 +185,11 @@ configure_sink(Sink, SinkDef) ->
{gen_event, start_link,
[{local, Sink}]},
permanent, 5000, worker, dynamic}),
-    determine_async_behavior(Sink,
-                             wrap_proplist_value(
-                               proplists:get_value(async_threshold, SinkDef)),
-                             wrap_proplist_value(
-                               proplists:get_value(async_threshold_window, SinkDef))
+    determine_async_behavior(Sink, proplists:get_value(async_threshold, SinkDef),
+                             proplists:get_value(async_threshold_window, SinkDef)
),
_ = maybe_install_sink_killer(Sink, proplists:get_value(killer_hwm, SinkDef),
proplists:get_value(killer_reinstall_after, SinkDef)),
start_handlers(Sink,
proplists:get_value(handlers, SinkDef, [])),
@@ -193,6 +200,8 @@ configure_extra_sinks(Sinks) ->
lists:foreach(fun({Sink, Proplist}) -> configure_sink(Sink, Proplist) end,
Sinks).
get_env(Application, Key) ->
get_env(Application, Key, undefined).
%% R15 doesn't know about application:get_env/3
get_env(Application, Key, Default) ->
get_env_default(application:get_env(Application, Key),
@@ -205,34 +214,50 @@ get_env_default({ok, Value}, _Default) ->
start(_StartType, _StartArgs) ->
{ok, Pid} = lager_sup:start_link(),
+    SavedHandlers = boot(),
+    _ = boot('__all_extra'),
+    _ = boot('__traces'),
+    clean_up_config_checks(),
+    {ok, Pid, SavedHandlers}.
+boot() ->
+    %% Handle the default sink.
     determine_async_behavior(?DEFAULT_SINK,
-                             application:get_env(lager, async_threshold),
-                             application:get_env(lager, async_threshold_window)),
+                             get_env(lager, async_threshold),
+                             get_env(lager, async_threshold_window)),
+    _ = maybe_install_sink_killer(?DEFAULT_SINK, get_env(lager, killer_hwm),
+                                  get_env(lager, killer_reinstall_after)),
     start_handlers(?DEFAULT_SINK,
+                   get_env(lager, handlers, ?DEFAULT_HANDLER_CONF)),
     lager:update_loglevel_config(?DEFAULT_SINK),
     SavedHandlers = start_error_logger_handler(
-                      application:get_env(lager, error_logger_redirect),
-                      interpret_hwm(application:get_env(lager, error_logger_hwm)),
-                      application:get_env(lager, error_logger_whitelist)
+                      get_env(lager, error_logger_redirect),
+                      interpret_hwm(get_env(lager, error_logger_hwm)),
+                      get_env(lager, error_logger_whitelist)
                     ),
     _ = lager_util:trace_filter(none),
+    SavedHandlers.
-    %% Now handle extra sinks
-    configure_extra_sinks(get_env(lager, extra_sinks, [])),
-    ok = add_configured_traces(),
+boot('__traces') ->
+    _ = lager_util:trace_filter(none),
+    ok = add_configured_traces();
-    clean_up_config_checks(),
+boot('__all_extra') ->
+    configure_extra_sinks(get_env(lager, extra_sinks, []));
-    {ok, Pid, SavedHandlers}.
+boot(?DEFAULT_SINK) -> boot();
+boot(Sink) ->
+    AllSinksDef = get_env(lager, extra_sinks, []),
+    boot_sink(Sink, lists:keyfind(Sink, 1, AllSinksDef)).
+boot_sink(Sink, {Sink, Def}) ->
+    configure_sink(Sink, Def);
+boot_sink(Sink, false) ->
+    configure_sink(Sink, []).
stop(Handlers) ->
lists:foreach(fun(Handler) ->
......
@@ -520,7 +520,12 @@ filesystem_test_() ->
application:set_env(lager, handlers, [{lager_test_backend, info}]),
application:set_env(lager, error_logger_redirect, false),
application:set_env(lager, async_threshold, undefined),
-                lager:start()
+                lager:start(),
+                %% A race condition where lager logs its own start up
+                %% makes several tests fail. See test/lager_test_backend
+                %% around line 800 for more information.
+                timer:sleep(5),
+                lager_test_backend:flush()
end,
fun(_) ->
file:delete("test.log"),
@@ -866,7 +871,9 @@ formatting_test_() ->
application:load(lager),
application:set_env(lager, handlers, [{lager_test_backend, info}]),
application:set_env(lager, error_logger_redirect, false),
-                lager:start()
+                lager:start(),
+                %% same race condition issue as above
+                timer:sleep(5)
end,
fun(_) ->
file:delete("test.log"),
......
@@ -61,6 +61,18 @@ handle_info({gen_event_EXIT, Module, normal}, #state{module=Module} = State) ->
{stop, normal, State};
handle_info({gen_event_EXIT, Module, shutdown}, #state{module=Module} = State) ->
{stop, normal, State};
handle_info({gen_event_EXIT, Module, {'EXIT', {kill_me, [_KillerHWM, KillerReinstallAfter]}}},
#state{module=Module, sink=Sink} = State) ->
%% Brutally kill the manager but stay alive to restore settings.
%%
%% SinkPid here means the gen_event process. Handlers *all* live inside the
%% same gen_event process space, so when the Pid is killed, *all* of the
%% pending log messages in its mailbox will die too.
SinkPid = whereis(Sink),
unlink(SinkPid),
exit(SinkPid, kill),
erlang:send_after(KillerReinstallAfter, self(), {reboot, Sink}),
{noreply, State};
handle_info({gen_event_EXIT, Module, Reason}, #state{module=Module,
config=Config, sink=Sink} = State) ->
case lager:log(error, self(), "Lager event handler ~p exited with reason ~s",
@@ -75,6 +87,9 @@ handle_info({gen_event_EXIT, Module, Reason}, #state{module=Module,
handle_info(reinstall_handler, #state{module=Module, config=Config, sink=Sink} = State) ->
install_handler(Sink, Module, Config),
{noreply, State};
handle_info({reboot, Sink}, State) ->
_ = lager_app:boot(Sink),
{noreply, State};
handle_info(stop, State) ->
{stop, normal, State};
handle_info(_Info, State) ->
@@ -135,11 +150,10 @@ reinstall_on_initial_failure_test_() ->
application:unset_env(lager, crash_log),
lager:start(),
try
-            ?assertEqual(1, lager_test_backend:count()),
-            {_Level, _Time, Message, _Metadata} = lager_test_backend:pop(),
-            ?assertMatch("Lager failed to install handler lager_crash_backend into lager_event, retrying later :"++_, lists:flatten(Message)),
+            ?assertEqual(0, lager_test_backend:count()),
             timer:sleep(6000),
+            lager_test_backend:flush(),
+            ?assertEqual(0, lager_test_backend:count()),
?assert(lists:member(lager_crash_backend, gen_event:which_handlers(lager_event)))
after
@@ -162,10 +176,10 @@ reinstall_on_runtime_failure_test_() ->
application:unset_env(lager, crash_log),
lager:start(),
try
-            ?assertEqual(0, lager_test_backend:count()),
             ?assert(lists:member(lager_crash_backend, gen_event:which_handlers(lager_event))),
             timer:sleep(6000),
-            ?assertEqual(2, lager_test_backend:count()),
+            _ = lager_test_backend:pop(), %% throw away application start up message
+            _ = lager_test_backend:pop(), %% throw away gen_event crash message
{_Severity, _Date, Msg, _Metadata} = lager_test_backend:pop(),
?assertEqual("Lager event handler lager_crash_backend exited with reason crash", lists:flatten(Msg)),
{_Severity2, _Date2, Msg2, _Metadata2} = lager_test_backend:pop(),
......
-module(lager_manager_killer).
-author("Sungjin Park <jinni.park@gmail.com>").
-behavior(gen_event).
-export([init/1, handle_event/2, handle_call/2, handle_info/2, terminate/2, code_change/3]).
-export([kill_me/0]).
-include("lager.hrl").
-record(state, {
killer_hwm :: non_neg_integer(),
killer_reinstall_after :: non_neg_integer()
}).
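%% Ask the killer handler on the default sink to kill its manager
%% immediately, regardless of the current mailbox length.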
kill_me() ->
gen_event:call(lager_event, ?MODULE, kill_self).
init([KillerHWM, KillerReinstallAfter]) ->
{ok, #state{killer_hwm=KillerHWM, killer_reinstall_after=KillerReinstallAfter}}.
handle_call(get_loglevel, State) ->
{ok, {mask, ?LOG_NONE}, State};
handle_call({set_loglevel, _Level}, State) ->
{ok, ok, State};
handle_call(get_settings, State = #state{killer_hwm=KillerHWM, killer_reinstall_after=KillerReinstallAfter}) ->
{ok, [KillerHWM, KillerReinstallAfter], State};
handle_call(kill_self, #state{killer_hwm=KillerHWM, killer_reinstall_after=KillerReinstallAfter}) ->
exit({kill_me, [KillerHWM, KillerReinstallAfter]});
handle_call(_Request, State) ->
{ok, ok, State}.
%% It's not the best idea in the world to check the queue length for every
%% log message. We can make this operation work on a poll timer in the
%% future.
handle_event({log, _Message}, State = #state{killer_hwm=KillerHWM, killer_reinstall_after=KillerReinstallAfter}) ->
{message_queue_len, Len} = process_info(self(), message_queue_len),
case Len > KillerHWM of
true ->
exit({kill_me, [KillerHWM, KillerReinstallAfter]});
_ ->
{ok, State}
end;
handle_event(_Event, State) ->
{ok, State}.
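%% A hypothetical timer-driven variant of the check above (a sketch only,
%% not part of this module): schedule a poll in init/1 and move the
%% length check out of the per-message path. `?POLL_INTERVAL' is an
%% assumed macro; note that messages sent to the gen_event manager are
%% delivered to every installed handler's handle_info/2.
%%
%%   init([KillerHWM, KillerReinstallAfter]) ->
%%       _ = erlang:send_after(?POLL_INTERVAL, self(), poll_mailbox),
%%       {ok, #state{killer_hwm=KillerHWM,
%%                   killer_reinstall_after=KillerReinstallAfter}}.
%%
%%   handle_info(poll_mailbox, State = #state{killer_hwm=KillerHWM,
%%                                            killer_reinstall_after=KillerReinstallAfter}) ->
%%       {message_queue_len, Len} = process_info(self(), message_queue_len),
%%       case Len > KillerHWM of
%%           true -> exit({kill_me, [KillerHWM, KillerReinstallAfter]});
%%           _ ->
%%               _ = erlang:send_after(?POLL_INTERVAL, self(), poll_mailbox),
%%               {ok, State}
%%       end;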
handle_info(_Info, State) ->
{ok, State}.
terminate(_Reason, _State) ->
ok.
code_change(_OldVsn, State, _Extra) ->
{ok, State}.
-module(lager_manager_killer_test).
-author("Sungjin Park <jinni.park@gmail.com>").
-compile([{parse_transform, lager_transform}]).
-ifdef(TEST).
-include_lib("eunit/include/eunit.hrl").
-define(TEST_SINK_NAME, '__lager_test_sink'). %% <-- used by parse transform
-define(TEST_SINK_EVENT, '__lager_test_sink_lager_event'). %% <-- used by lager API calls and internals for gen_event
overload_test_() ->
{timeout, 60,
fun() ->
application:stop(lager),
application:load(lager),
Delay = 1000, % sleep 1 sec on every log
KillerHWM = 10, % kill the manager if there are more than 10 pending logs
KillerReinstallAfter = 1000, % reinstall killer after 1 sec
application:set_env(lager, handlers, [{lager_slow_backend, [{delay, Delay}]}]),
application:set_env(lager, async_threshold, undefined),
application:set_env(lager, error_logger_redirect, true),
application:set_env(lager, killer_hwm, KillerHWM),
application:set_env(lager, killer_reinstall_after, KillerReinstallAfter),
ensure_started(lager),
lager_config:set(async, true),
Manager = whereis(lager_event),
erlang:trace(all, true, [procs]),
[lager:info("~p'th message", [N]) || N <- lists:seq(1,KillerHWM+2)],
Margin = 100,
ok = confirm_manager_exit(Manager, Delay+Margin),
ok = confirm_sink_reregister(lager_event, Margin),
erlang:trace(all, false, [procs]),
wait_until(fun() ->
case proplists:get_value(lager_manager_killer, gen_event:which_handlers(lager_event)) of
undefined -> false;
_ -> true
end
end, Margin, 15),
wait_until(fun() ->
case gen_event:call(lager_event, lager_manager_killer, get_settings) of
[KillerHWM, KillerReinstallAfter] -> true;
_Other -> false
end
end, Margin, 15),
application:stop(lager)
end}.
overload_alternate_sink_test_() ->
{timeout, 60,
fun() ->
application:stop(lager),
application:load(lager),
Delay = 1000, % sleep 1 sec on every log
KillerHWM = 10, % kill the manager if there are more than 10 pending logs
KillerReinstallAfter = 1000, % reinstall killer after 1 sec
application:set_env(lager, handlers, []),
application:set_env(lager, extra_sinks, [{?TEST_SINK_EVENT, [
{handlers, [{lager_slow_backend, [{delay, Delay}]}]},
{killer_hwm, KillerHWM},
{killer_reinstall_after, KillerReinstallAfter},
{async_threshold, undefined}
]}]),
application:set_env(lager, error_logger_redirect, true),
ensure_started(lager),
lager_config:set({?TEST_SINK_EVENT, async}, true),
Manager = whereis(?TEST_SINK_EVENT),
erlang:trace(all, true, [procs]),
[?TEST_SINK_NAME:info("~p'th message", [N]) || N <- lists:seq(1,KillerHWM+2)],
Margin = 100,
ok = confirm_manager_exit(Manager, Delay+Margin),
ok = confirm_sink_reregister(?TEST_SINK_EVENT, Margin),
erlang:trace(all, false, [procs]),
wait_until(fun() ->
case proplists:get_value(lager_manager_killer, gen_event:which_handlers(?TEST_SINK_EVENT)) of
undefined -> false;
_ -> true
end
end, Margin, 15),
wait_until(fun() ->
case gen_event:call(?TEST_SINK_EVENT, lager_manager_killer, get_settings) of
[KillerHWM, KillerReinstallAfter] -> true;
_Other -> false
end
end, Margin, 15),
application:stop(lager)
end}.
ensure_started(App) ->
case application:start(App) of
ok ->
ok;
{error, {not_started, Dep}} ->
ensure_started(Dep),
ensure_started(App)
end.
confirm_manager_exit(Manager, Delay) ->
receive
{trace, Manager, exit, killed} ->
?debugFmt("Manager ~p killed", [Manager]);
Other ->
?debugFmt("OTHER MSG: ~p", [Other]),
confirm_manager_exit(Manager, Delay)
after Delay ->
?assert(false)
end.
confirm_sink_reregister(Sink, Delay) ->
receive
{trace, _Pid, register, Sink} ->
?assertNot(lists:member(lager_manager_killer, gen_event:which_handlers(Sink)))
after Delay ->
?assert(false)
end.
wait_until(_Fun, _Delay, 0) ->
{error, too_many_retries};
wait_until(Fun, Delay, Retries) ->
case Fun() of
true -> ok;
false -> timer:sleep(Delay), wait_until(Fun, Delay, Retries-1)
end.
-endif.
-module(lager_slow_backend).
-author("Sungjin Park <jinni.park@gmail.com>").
-behavior(gen_event).
-export([init/1, handle_call/2, handle_event/2, handle_info/2, terminate/2, code_change/3]).
-include("lager.hrl").
-record(state, {
delay :: non_neg_integer()
}).
init([{delay, Delay}]) ->
{ok, #state{delay=Delay}}.
handle_call(get_loglevel, State) ->
{ok, lager_util:config_to_mask(debug), State};
handle_call(_Request, State) ->
{ok, ok, State}.
handle_event({log, _Message}, State) ->
timer:sleep(State#state.delay),
{ok, State};
handle_event(_Event, State) ->
{ok, State}.
handle_info(_Info, State) ->
{ok, State}.
terminate(_Reason, _State) ->
ok.
code_change(_OldVsn, State, _Extra) ->
{ok, State}.
@@ -515,6 +515,8 @@ lager_test_() ->
application:stop(lager),
application:set_env(lager, traces, [{lager_test_backend, [{foo, bar}], debug}]),
lager:start(),
timer:sleep(5),
flush(),
lager:debug([{foo, bar}], "hello world"),
?assertEqual(1, count()),
application:unset_env(lager, traces),
@@ -537,6 +539,7 @@ lager_test_() ->
end
},
{"stopped trace stops and removes its event handler - default sink (gh#267)",
{timeout, 10,
fun() ->
Sink = ?DEFAULT_SINK,
StartHandlers = gen_event:which_handlers(Sink),
@@ -560,14 +563,14 @@ lager_test_() ->
?assertEqual(length(StartGlobal)+1, length(lager_config:global_get(handlers))),
ok = lager:stop_trace(TestTrace2),
-        EndHandlers = gen_event:which_handlers(?DEFAULT_SINK),
+        EndHandlers = gen_event:which_handlers(Sink),
EndGlobal = lager_config:global_get(handlers),
{_, T3} = lager_config:get({Sink, loglevel}),
?assertEqual([], T3),
?assertEqual(StartHandlers, EndHandlers),
?assertEqual(StartGlobal, EndGlobal),
ok
-        end
+        end}
},
{"record printing works",
fun() ->
@@ -796,6 +799,18 @@ setup() ->
application:set_env(lager, error_logger_redirect, false),
application:unset_env(lager, traces),
lager:start(),
%% There is a race condition between the application start up, lager logging its own
%% start up condition and several tests that count messages or parse the output of
%% tests. When the lager start up message wins the race, it causes these tests
%% which parse output or count message arrivals to fail.
%%
%% We introduce a sleep here to allow `flush' to arrive *after* the start up
%% message has been received and processed.
%%
%% This race condition was first exposed during the work on
%% 4b5260c4524688b545cc12da6baa2dfa4f2afec9, the commit which introduced
%% the lager manager killer.
timer:sleep(5),
gen_event:call(lager_event, ?MODULE, flush).
cleanup(_) ->
@@ -1555,7 +1570,8 @@ async_threshold_test_() ->
?assertEqual(true, lager_config:get(async)),
%% put a ton of things in the queue
-    Workers = [spawn_monitor(fun() -> [lager:info("hello world") || _ <- lists:seq(1, 1000)] end) || _ <- lists:seq(1, 15)],
+    Workers = [spawn_monitor(fun() -> [lager:info("hello world") || _ <- lists:seq(1, 100)] end)
+               || _ <- lists:seq(1, 10)],
%% serialize on mailbox
_ = gen_event:which_handlers(lager_event),
@@ -1573,7 +1589,9 @@ async_threshold_test_() ->
%% point in the past
?assertMatch([{sync_toggled, N}] when N > 0,
ets:lookup(async_threshold_test, sync_toggled)),
-            %% wait for all the workers to return, meaning that all the messages have been logged (since we're definitely in sync mode at the end of the run)
+            %% wait for all the workers to return, meaning that all
+            %% the messages have been logged (since we're
+            %% definitely in sync mode at the end of the run)
collect_workers(Workers),
%% serialize on the mailbox again
_ = gen_event:which_handlers(lager_event),
......