38 Binary Options Brokers List – Reviews and Ratings!

Anyone know a good BTC/USD binary options broker, ideally with API access? /r/Bitcoin

submitted by BitcoinAllBot to BitcoinAll [link] [comments]

Good place to trade bitcoin binary options through API interface?

I did some searching on Google and this subreddit, and didn't find much that looked up-to-date or trustworthy.
Just looking for a reputable trading site that offers binary options and has an API for access. And, yes, I know that binary trading is essentially pure speculation. :)
submitted by sigma_noise to BitcoinMarkets [link] [comments]

I'm looking for a Python API to a binary options trading platform.

I have an algorithm for binary options trading, but I don't feel like manually working a GUI to do my trades.
Could someone point me to a resource for executing my trades via Python?
submitted by metaperl to Python [link] [comments]

Best Practices for A C Programmer

Hi all,
Long time C programmer here, primarily working in the embedded industry (particularly involving safety-critical code). I've been a lurker on this sub for a while but I'm hoping to ask some questions regarding best practices. I've been trying to start using C++ in a lot of my work - particularly taking advantage of some of the code-reuse and power of C++ (particularly constexpr, some loose template programming, stronger type checking, RAII, etc.).
I would consider myself maybe an 8/10 C programmer but I would conservatively rate myself as 3/10 in C++ (with 1/10 meaning the absolute minimum ability to write, google syntax errata, diagnose, and debug a program). Perhaps I should preface the post by saying I am more than aware that C is by no means a subset of C++ and there are many language constructs permitted in one that are not in the other.
In any case, I was hoping to get a few answers regarding best practices for C++. Keep in mind that the typical target device I work with does not have a heap of any sort, and so a lot of the features that constitute "modern" C++ (non-initialization use of dynamic memory, STL meta-programming, hash-maps, lambdas (as I currently understand them)) are a big no-no in terms of passing safety review.

When do I overload operators inside a class as opposed to outside?

... And what are the arguments for/against each paradigm? See below:
    /* Overload example 1 (overloaded inside class) */
    class myclass {
    private:
        unsigned int a;
        unsigned int b;

    public:
        myclass(void);
        unsigned int get_a(void) const;
        /* const added: comparison has no side effects, and without it
         * const objects could not be compared */
        bool operator==(const myclass &rhs) const;
    };

    bool myclass::operator==(const myclass &rhs) const
    {
        if (this == &rhs) {
            return true;
        }
        return (this->a == rhs.a) && (this->b == rhs.b);
    }
As opposed to this:
    /* Overload example 2 (overloaded outside of class) */
    class CD {
    private:
        unsigned int c;
        unsigned int d;

    public:
        /* CTOR - init list reordered to match declaration order of c, d */
        CD(unsigned int _c, unsigned int _d) : c(_c), d(_d) {}
        unsigned int get_c(void) const; /* trivial getters */
        unsigned int get_d(void) const; /* trivial getters */
    };

    /* In this implementation, if I don't make the getters (get_c, get_d) const,
     * it won't compile despite their access specifiers being public.
     *
     * It seems like the const keyword in C++ really should be interpreted as
     * "read-only AND no side effects" rather than just read-only as in C.
     * But my current understanding may just be flawed...
     *
     * My confusion is as follows: the function args are constant references,
     * so why do I have to promise that the member functions have no side
     * effects on the private object members? Is this something specific to
     * the == operator? */
    bool operator==(const CD &lhs, const CD &rhs)
    {
        if (&lhs == &rhs)
            return true;
        return (lhs.get_c() == rhs.get_c()) && (lhs.get_d() == rhs.get_d());
    }
When should I use the example 1 style over the example 2 style? What are the pros and cons of 1 vs 2?

What's the deal with const member functions?

This is more of a subtle confusion, but it seems like in C++ the const keyword means different things based on the context in which it is used. I'm trying to develop a relatively nuanced understanding of what's happening under the hood, and I most certainly have misunderstood many language features, especially because C++ has likely changed greatly in the last ~6-8 years.

When should I use enum classes versus plain old enum?

To be honest I'm not entirely certain I fully understand the implications of using enum versus enum class in C++.
This is made more confusing by the fact that there are subtle differences between the way C and C++ treat or permit various language constructs (const, enum, typedef, struct, void*, pointer aliasing, type punning, tentative declarations).
In C, enums decay to integer values at compile time. But in C++, the way I currently understand it, enums are their own type. Thus, in C, the following code would be valid, but a C++ compiler would generate a warning (or an error, haven't actually tested it):
    /* Example 3: (enums: valid in C, invalid in C++) */
    enum COLOR {
        RED,
        BLUE,
        GREY
    };

    enum PET {
        CAT,
        DOG,
        FROG
    };

    /* This is compatible with a C-style enum conception but not C++ */
    enum SHAPE {
        BALL = RED, /* In C, these work because int = int is valid */
        CUBE = DOG,
    };
If my understanding is indeed the case, do enums have an implicit namespace (language construct, not the C++ keyword) as in C? As an add-on to that, in C++ you can also declare enums with a specified underlying type (below). What am I supposed to make of this? Should I just be using it to reduce code size when possible (similar to GCC's -fshort-enums option)? Since most processors are word-based, would it be more performant to use the processor's word type than the syntax specified above?
    /* Example 4: (purely C++-style enums, use of enum class / enum struct) */
    /* C++ permits forward enum declaration when the underlying type is specified */
    enum FRUIT : int;
    enum VEGGIE : short;

    enum FRUIT : int /* underlying type must be repeated; these are ints */
    {
        APPLE,
        ORANGE,
    };

    enum VEGGIE : short /* as I understand it, these are shorts */
    {
        CARROT,
        TURNIP,
    };
Complicating things even further, I've also seen the following syntax:
    /* What the heck is an enum class anyway? When should I use them? */
    enum class THING {
        THING1,
        THING2,
        THING3
    };

    /* And if classes and structs are interchangeable (minus assumptions
     * about default access specifiers), what does that mean for
     * the following definition? */
    enum struct FOO /* Is this even valid syntax? */
    {
        FOO1,
        FOO2,
        FOO3
    };
Given that enumerated types greatly improve code readability, I've been trying to wrap my head around all this. When should I be using the various language constructs? Are there any pitfalls in a given method?

When to use POD structs (a-la C style) versus a class implementation?

If I had to take a stab at answering this question, my intuition would be to use POD structs for passing aggregate types (as in function arguments) and using classes for interface abstractions / object abstractions as in the example below:
    struct aggregate {
        unsigned int related_stuff1;
        unsigned int related_stuff2;
        char name_of_the_related_stuff[20];
    };

    class abstraction {
    private:
        unsigned int private_member1;
        unsigned int private_member2;

    protected:
        unsigned int stuff_for_child_classes;

    public:
        /* big 3 */
        abstraction(void);
        abstraction(const abstraction &other);
        ~abstraction(void);

        /* COPY semantics (I have a better grasp on this than MOVE) */
        abstraction &operator=(const abstraction &rhs);

        /* MOVE semantics (subtle semantics of which I don't fully grasp yet) */
        abstraction &operator=(abstraction &&rhs);

        /* I've seen implementations of this that use a copy + swap design
         * pattern (taking the argument by value, as below), but that relies on
         * std::move and I realllllly don't get what is happening under the
         * hood in std::move. Note: this by-value overload cannot coexist with
         * the copy-assignment overload above - calls would be ambiguous - so
         * it's one style or the other. */
        abstraction &operator=(abstraction rhs);

        void do_some_stuff(void); /* member function */
    };
Is there an accepted best practice for this, or is it entirely preference? Are there arguments for only using classes? And what about vtables, in cases where I need byte-wise alignment (such as device register overlays) and have to guarantee the precise placement of members?

Is there a best practice for integrating C code?

Typically (and up to this point), I've just done the following:
    /* Example 5: linking a C library */
    /* Disable name mangling, and then give the C++ linker /
     * toolchain the compiled binaries */
    #ifdef __cplusplus
    extern "C" {
    #endif /* C linkage */

    #include "device_driver_header_or_a_c_library.h"

    #ifdef __cplusplus
    }
    #endif /* C linkage */

    /* C++ code goes here */
As far as I know, this is the only way to prevent the C++ compiler from generating different object symbols than those in the C header file. Again, this may just be ignorance of C++ standards on my part.

What is the proper way to selectively incorporate RTTI without code size bloat?

Is there even a way? I'm relatively fluent in CMake, but I guess the underlying question is whether binaries that incorporate RTTI are compatible with those that don't (and what pitfalls may ensue when mixing the two).

What about compile time string formatting?

One of my biggest gripes about C (particularly regarding string manipulation) is that variadic arguments are frequently handled at runtime, especially on embedded targets. This makes string manipulation via the C standard library (printf-style format strings) uncomputable at compile time in C.
This is sadly the case even when the ranges and values of parameters and formatting outputs are entirely known beforehand. C++ template programming seems to be a big thing in "modern" C++, and I've seen a few projects on this sub that use the Turing-completeness of the template system to do some crazy things at compile time. Is there a way to bypass this ABI limitation using C++ features like constexpr, templates, and lambdas? My (somewhat pessimistic) suspicion is that since the generated assembly must be ABI-compliant this isn't possible. Is there a way around this? What about the std::format stuff I've been seeing on this sub periodically?

Is there a standard practice for namespaces and when to start incorporating them?

Is it from the start? Is it when the boundaries of a module become clearly defined? Or is it just personal preference / based on project scale and modularity?
If I had to make a guess, it would be at the point that you get a "build group" for a project (a group of source files that should be compiled together), as that would loosely define the boundaries of a series of abstractions / APIs you may provide to other parts of a project.
--EDIT-- markdown formatting
submitted by aWildElectron to cpp [link] [comments]

How to name sync & async items?

How should I organize parallel sets of synchronous and asynchronous modules, structs, and functions?
  1. This doesn't compile:
    pub mod async; // keyword, no good
    pub mod sync;
    I considered async_ and r#async but don't want to get punched.
  2. sync in std::sync means "synchronization" not "synchronous" so maybe that's not the best?
  3. Should I make default methods synchronous and add a suffix for async ones: open() and open_async()? (Async is the cool stuff, I don't like giving it the crappier name...)
  4. I've been suggested to make the async code the default and hide the sync stuff in a module.
    async fn open() -> io::Result<File>;

    mod blocking {
        fn open() -> io::Result<File>;
    }
Other ideas? Are there any popular libraries that do both sync and async?
submitted by jkugelman to rust [link] [comments]

Working Intel WiFi + Bluetooth with itlwm

I can't believe I hadn't heard of this sooner! Thanks to u/myusrm for bringing it to my attention.
First, the WiFi.
itlwm is an Intel WiFi driver by zxystd on GitHub. It supports a range of Intel WiFi cards.
This is possible because the driver is a port of OpenBSD's Intel driver, and it emulates an ethernet device (no AirDrop and the like with this, unfortunately).
There's a ton of info from zxystd on his Chinese, invite-only PCBeta thread, but it's hard to understand (and impossible to download the binaries), so I'll share what I've worked out:
There are three kexts available, all to be injected by the bootloader. The first, `itlwm.kext`, is for most Intel cards (like my 9560); a list is available on the GitHub README. The second, `itlwmx.kext`, is for newer WiFi 6 cards. The Info.plist files in these kexts can optionally be edited with SSIDs and passwords to configure automatic connections on boot. I'm not sure what the third, `itl80211.kext`, is for - but I didn't need it.
There's also an optional app, HeliPort, to configure WiFi settings.
zxystd says they'll release binaries soon, but I've built some myself for those who want prebuilts now: the kexts, and the app.
EDIT: Here are some newer (less tested) builds.
Now, the Bluetooth:
To get Bluetooth working, you can add the kexts from zxystd's repo to your bootloader. Don't put these in /Library/Extensions, as doing so can cause system instability.

I'm amazed that this exists - I thought it would never be possible to get Intel WiFi working at all. This ethernet method is probably the best we'll get, though, as Apple's WiFi APIs are completely undocumented and hard to work with.
(This works for me on macOS Big Sur 11.0 Beta (20A4299v), with an Intel Wireless 9560 card).

EDIT: Guys, please don't make GitHub issues because you can't work out how to build the binaries.
submitted by superl2 to hackintosh [link] [comments]

NanoFusion - Project Update and Next Steps

Build-Off Result

I'm sure some people will be wondering about the status of the NanoFusion project going forward. Naturally, the outcome of the Nano Build-Off was pretty disappointing for me personally. After initially receiving such a wave of positive feedback here on reddit, it was unfortunate to not even crack the top 20 projects.
In spite of that result, I think the community's desire to see a trustless privacy protocol in the Nano ecosystem is actually quite strong. I believe this Build-Off result is primarily a reflection of the judging criteria, which skewed strongly towards apps that were already somewhat polished, and able to be tested by one person within the space of 10 minutes. This naturally disfavours a project like NanoFusion which is still a proof-of-concept, and requires multiple participants in order to properly use it. All that to say, while I applaud the winning projects for their efforts, and extend my gratitude to Nanillionaire for sponsoring the event, I don't believe that the Build-Off result gives a full picture of the community's true priorities for future development of the Nano ecosystem.
Nevertheless this result points to a stark reality: NanoFusion is not yet ready for consumer use, not by a long shot.

What will it take for NanoFusion to be consumer-ready?

Protocol and Reference Implementation Status
There is a small amount of work to be done to finish the reference implementation of the protocol. The binary tree of input mix accounts has been constructed, but the code is not yet written to actually execute the mix, nor to trigger and execute refunds where necessary. That is really the last step that needs to be completed for the reference implementation, and it's not especially complicated. The tricky bit is that there are still a few bugs around communication between the clients that need ironing out. But those are relatively minor bugs, I'm confident they won't require fundamental changes to the protocol or the implementation architecture.
However, once the reference implementation is complete, that is where a whole new set of challenges begins.
Wallet Integration
The primary challenge will be to integrate NanoFusion into one or more popular wallets. For a privacy protocol to be most effective, we need as many people as possible using it. In a cryptocurrency like Nano, where transactions and addresses are all publicly visible on a block explorer, privacy is achieved by making it difficult to determine which transactions belong to you. Making it difficult is a matter of having your transactions get "lost in the crowd". The crowd of transactions that might potentially be yours is called the "anonymity set". We need that anonymity set to be as large as possible, which means we need as many people participating in Fusion events as possible.
The best way to achieve this is to get NanoFusion adopted by popular wallets, and ideally to have it enabled by default. The fewer decisions a user needs to make in order to start participating, the better.
This raises one very important question. How do we make it as easy and appealing as possible for the developers of popular wallets to integrate this technology?
Workflow Design
In order to make NanoFusion integration appealing to wallet developers, I believe we need to gear NanoFusion integration around workflows that actually work for end-users of the wallet. This is not as simple as it appears.
The Nano ecosystem is currently geared around the assumption that addresses will tend to be re-used for many sends and receives. This is almost intrinsic to the ORV consensus mechanism. You keep your funds in one account, and the voting weight for that account is assigned to your representative.
In a UTXO-based cryptocurrency, BCH in particular, it is much more normal to use a separate subaddress for every incoming transaction. CashFusion on BCH works by taking all your different receive addresses and mixing the funds from those addresses together (along with the funds of many other people's subaddress sets). But on Nano it's different. Imagine you have an online store accepting Nano funds via BrainBlocks integration. If you receive 100 payments, you might have BrainBlocks forward them all to just one account that you own. But this makes it trivial for a customer to look at the block explorer and see all of your sales volume, which completely undermines your privacy.
In the context of something like BrainBlocks, it's easy to see how our e-commerce store could generate a new address for each transaction, and have BrainBlocks forward funds to that new address. Then we could run NanoFusion later to obscure the linkages between our individual sales. But what about addresses that are shared in public? Lots of people put up single Nano addresses to receive donations, etc. What does NanoFusion do with those? For NanoFusion to be most effective, a given user should NOT have just one input and one output account in the mix. It makes it too easy for their input and output accounts to be linked (at least to a moderate-to-high degree of probability) by the publicly visible amounts in the accounts.
For NanoFusion to be most effective, we need to develop a culture where it is normal for people to use a new address each time they receive some nano. How do we make it appealing for wallet developers to build their wallets this way? I don't really know. The only example of this pattern that I know of is Nanonymous (https://github.com/LilleJohs/Nanonymous). We could potentially implement something like stealth addresses, so that the user really gives out one canonical public address, but a different receive address is actually used for each transaction "under the hood". However, that adds a whole new layer of complexity. It means wallets have to be upgraded to know how to interact with a stealth address.
API Design
Even if we could arrange things so that it was more common for individuals to have multiple input accounts to mix, we would still be left with another question. What would wallet developers want the API for NanoFusion to look like? By nature, NanoFusion requires a large number of messages to be sent back and forth between all of the mix participants. For security reasons, those messages cannot be sent all at once. Player A has to wait for Player B to send message 1 before it is safe (cryptographically) for Player A to reveal the content of message 2.
What should a library look like that manages that complexity on behalf of the wallet developer? What language should it be written in? I have begun this project under the assumption that the most common wallet-dev language would be javascript, but there may be cases where other platforms are needed.

Where To From Here?

Technical Reflections
Thinking through all of these practical challenges has given me a new perspective on the whole issue of cryptocurrency privacy protocols. I have a much greater respect for what has been achieved by the Monero project. In Monero, everyone actually uses the privacy protocol. As described above, that is no small accomplishment. Even though the privacy protocols for Dash, ZCash, BTC and BCH do basically work, their use is not widespread. Even leaving aside the issue of the extra transaction fees incurred (which is not such a problem for Nano), these optional privacy protocols are just not that convenient to use. Because not everyone uses them, the anonymity set is not nearly as large as it could be. And because not everyone uses them, transactions you do before and after a mix/fusion event leak metadata which can be used to undermine the privacy that you gained by using the privacy protocol in the first place.
Inevitably, NanoFusion will also suffer from this problem. Suppose that 20% of the Nano community starts regularly participating in fusions (a very generous estimate, given the low adoption rate of optional privacy features in the other cryptocurrencies mentioned). That still leaves the large majority of transactions probably re-using addresses most of the time. This means that the non-private majority will leak fresh metadata whenever they interact with accounts that were previously obscured through NanoFusion. This is not an easy problem to overcome. It can only be done with a culture shift towards ubiquitous privacy, and that can probably only be achieved by all major wallets agreeing to enable privacy features by default. Not an easy hill to climb.
Personal Circumstances
For the sake of transparency, I also want to mention that I will be stepping back from NanoFusion for a while. This is simply a necessity of life. Our first child will be born in a few months. Once that happens, I will obviously have a lot going on and much less time available to work on these kinds of side projects. Between now and then, I need to focus on other projects which have more potential to generate some income for my little family. I'm about to be a dad(!), and my family comes first.
I'm very glad to have (hopefully) contributed some useful groundwork for the process of bringing privacy to Nano. This project also gave me the chance to learn some new technologies at a much deeper level; I'm grateful for that too. Nevertheless, for the foreseeable future, I'll be stepping back. I don't make that decision lightly. I put a lot of blood, sweat and tears into bringing NanoFusion this far, so I definitely hope it doesn't just fall by the wayside. I hope others will pick it up and run with it in my absence.
Call to Action
Want to make NanoFusion happen? Here's what we really need next:
  1. Wallet Developers - we need you to speak up. Tell us, what would an ideal NanoFusion API look like? How can we make it as easy as possible for you to integrate NanoFusion into your wallet app? What programming language do you want to use to consume that API? What I would love to see is several wallet developers collaborating together to create a document describing their ideal API. That will make it much easier for potential developers to pick it up and start implementing it.
  2. Javascript developers - are any of you interested in stepping up and finishing off the last bits of the reference implementation for NanoFusion?
As always, details of the project are available at http://nanofusion.casa (including demo videos, technical whitepaper and the link to the GitHub repo).
God bless everyone, thank you to all those who have followed along and offered so much encouragement for this project.
submitted by fatalglory to nanocurrency [link] [comments]

Seasoned Rust programmers: what patterns, idioms, conventions would you impress upon Rust newcomers?

Hey guys,
This post has some background and a broad question, I hope that's okay.
I've been building the core API server for a side project of mine in Rust. It's an HTTP service, more on that later. This is my first time _really_ using Rust (I come from the typeless Wild West that is Node.js land), and getting started was a bit rough: arguing with the compiler, not understanding what was going on as I tried to piece together different code snippets I liked from across forums, example repos, and what I was reading in "the book".
As I started to understand things a little more, I got more comfortable with the language. Getting hip to rust-analyzer really helped me a lot; I've started to feel less like I'm arguing with the compiler and more like I'm having a conversation with it (I read that in the comments on this sub somewhere but it's also how I feel). I am no longer panicking (in an emotional sense, not a computer sense) as I set out on a new task in the codebase. I'm still learning a lot, but I have a degree of confidence and familiarity now. I am very proud of my Warp+sqlx server.
As I got familiar, I started wondering what conventions or patterns I might be missing. I feel that I am missing some stuff. I am curious: from seasoned Rust programmers to newcomers like myself, what patterns, idioms, conventions do you often see missing?
I apologize for the broad question. To help, I do have a specific example of the kind of thing I think I might be missing or am unaware of:
I have models (for sqlx and requests) defined as structs in /models/modelname.rs. When I need them, I pull them into my handlers (in the peer directory /handlers). I notice a lot of libraries I use have `.builder()` functions impl'd for their types; it also looks like `.new()` is a common method (if not convention?). But this is not how I instantiate my structs. In the handler, I pull in the struct and instantiate it using `let my_struct = MyStruct { ..fields.. }`. Is this pattern elementary? Or would you consider it fine for a closed-source binary package (i.e. it's not a library going out to the community, so patterns be damned, just get it instantiated)?
If I may, I would love to give a shouts-out to the folks working on sqlx. Great library akin to how I like to deal with database queries (not an ORM guy), but also what a welcoming, healthy, helpful community. I've also really enjoyed using Warp. Once I feel more comfortable, I hope to start contributing to the Rust community.
Update: I just want to say thank you so much to everyone for all the feedback. I’ve implemented methods for my structs - both new() and set_field - and that already makes me feel so much cleaner about my code. I’m also about to really dig into Option and Result types, so my API can provide actionable error messages when a request fails. This said, I am eager to also hear any other patterns or idioms that novices commonly miss, even if not prompted by the background I provided. Again, thank you!!
submitted by mzl0 to rust [link] [comments]

Wine 5.9 Released

The Wine development release 5.9 is now available.
 
https://www.winehq.org/announce/5.9 
 
What's new in this release (see below for details):
 
- Major progress on the WineD3D Vulkan backend.
- Initial support for splitting dlls into PE and Unix parts.
- Support for generating PDB files when building PE dlls.
- Timestamp updates in the Kernel User Shared Data.
- Various bug fixes.
 
The source is available from the following locations:
http://dl.winehq.org/wine/source/5.x/wine-5.9.tar.xz
http://mirrors.ibiblio.org/wine/source/5.x/wine-5.9.tar.xz
 
Binary packages for various distributions will be available from:
http://www.winehq.org/download 
 
You will find documentation on
http://www.winehq.org/documentation 
 
You can also get the current source directly from the git repository.
Check
http://www.winehq.org/git for details. 
 
Wine is available thanks to the work of many people.
See the file AUTHORS in the distribution for the complete list.
 
 
Bugs fixed in 5.9 (total 28):
 
15489 Build should optionally produce .pdb file suitable for use with symbol server
29168 Multiple games and applications need realtime updates to KSYSTEM_TIME members in KUSER_SHARED_DATA (Star Wars: The Old Republic game client, Blizzard games, GO 1.4+ runtime, Denuvo Anti-Tamper x64 #2)
29806 Hype The Time Quest: DirectX Media (DXM) v6.0 runtime installer fails (advpack.ExecuteCab should extract the INF from CAB before running the install part)
30814 Age of Empires II scrolling gets stuck after Alt-Tab away and back
42125 4k/8k demos often fail with 'Bad EXE Format' or 'error c0000020' due to Crinkler executable file compressor's "optimized" usage of PE header fields (loader compatibility)
43959 webservices/reader tests fail on arm
43960 rpcrt4/cstub tests fail on arm
43962 msvcrt/string tests fail on arm
44860 4k/8k demos crash due to Crinkler executable file compressor expecting PEB address in %ebx on process entry
48186 every wine process shows a definite leak in dlls/ntdll/env.c
48289 Grand Theft Auto 5 crashes after loading (GTA5 expects Vista+ PEB_LDR_DATA structure fields)
48441 mouse coordinates cannot exceed initial desktop size during startup of wineserver
48471 Mismatching behavior of GetEnvironmentVariableW for empty / long values
48490 Restored minimized windows have wrong height
48775 Microsoft Teams 1.3.x crashes on unimplemented function IPHLPAPI.DLL.NotifyRouteChange2
49105 Deus Ex GOTY fails to start with Direct3D renderer
49115 Hitman (2016) and Hitman 2 (2018) fail to launch in DX11 mode
49128 Good Company crash on launch
49130 NVIDIA RTX Voice installer crashes on unimplemented function setupapi.dll.SetupDiGetActualSectionToInstallExW
49131 wineboot fails to start
49139 Regression: Wine crashes on startup on FreeBSD >= 5.7
49140 Windows 10 SDK installer hangs on startup
49142 Horizontal mouse scroll events (X11 buttons 6 and 7) should not be translated to back/forward events
49146 Hearts of Iron IV needs api-ms-win-crt-private-l1-1-0.dll._o_sin
49173 widl generates invalid code for Gecko's ISimpleDOM.idl
49175 Duplicated checking canonicalized inside kernelbase/path.c
49200 Steam hangs after login
49203 Possible incorrect usage >= instead <= in shlview.c
submitted by catulirdit to linux_gaming [link] [comments]

C++ Best Practices For a C Programmer

Hi all,
Long time C programmer here, primarily working in the embedded industry (particularly involving safety-critical code). I've been a lurker on this sub for a while but I'm hoping to ask some questions regarding best practices. I've been trying to start using c++ on a lot of my work - particularly taking advantage of some of the code-reuse and power of C++ (particularly constexpr, some loose template programming, stronger type checking, RAII etc).
I would consider myself maybe an 8/10 C programmer but I would conservatively maybe rate myself as 3/10 in C++ (with 1/10 meaning the absolute minmum ability to write, google syntax errata, diagnose, and debug a program). Perhaps I should preface the post that I am more than aware that C is by no means a subset of C++ and there are many language constructs permitted in one that are not in the other.
In any case, I was hoping to get a few answers regarding best practices for c++. Keep in mind that the typical target device I work with does not have a heap of any sort and so a lot of the features that constitute "modern" C++ (non-initialization use of dynamic memory, STL meta-programming, hash-maps, lambdas (as I currently understand them) are a big no-no in terms of passing safety review.

When do I overload operators inside a class as opposed to outisde?


... And what are the arguments foagainst each paradigm? See below:
/* Overload example 1 (overloaded inside class) */
class myclass
{
private:
    unsigned int a;
    unsigned int b;

public:
    myclass(void);
    unsigned int get_a(void) const;
    bool operator==(const myclass &rhs);
};

bool myclass::operator==(const myclass &rhs)
{
    if (this == &rhs)
    {
        return true;
    }
    else
    {
        if (this->a == rhs.a && this->b == rhs.b)
        {
            return true;
        }
    }
    return false;
}
As opposed to this:

/* Overload example 2 (overloaded outside of class) */
class CD
{
private:
    unsigned int c;
    unsigned int d;

public:
    CD(unsigned int _c, unsigned int _d) : d(_d), c(_c) {}; /* CTOR */
    unsigned int get_c(void) const; /* trivial getters */
    unsigned int get_d(void) const; /* trivial getters */
};

/* In this implementation, if I don't make the getters (get_c, get_d) constant,
 * it won't compile despite their access specifiers being public.
 *
 * It seems like the const keyword in C++ really should be interpreted as
 * "read-only AND no side effects" rather than just read-only as in C.
 * But my current understanding may just be flawed...
 *
 * My confusion is as follows: the function args are constant references,
 * so why do I have to promise that the function methods have no side-effects on
 * the private object members? Is this something specific to the == operator? */
bool operator==(const CD &lhs, const CD &rhs)
{
    if (&lhs == &rhs)
        return true;
    else if ((lhs.get_c() == rhs.get_c()) && (lhs.get_d() == rhs.get_d()))
        return true;
    return false;
}
When should I use the example 1 style over the example 2 style? What are the pros and cons of 1 vs 2?
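For reference, the usual guidance is that symmetric binary operators like == work best as non-members (optionally friends), while operators tied to a specific left-hand object (=, +=, []) must be members. A hedged sketch of both forms on one heap-free type (the type and member names are hypothetical, not from the post):

```cpp
#include <cassert>

class Pair {
public:
    Pair(unsigned a, unsigned b) : a_(a), b_(b) {}

    // Member form: the implicit left operand is *this, and the function
    // must be const so it can be called through const references.
    bool equals(const Pair &rhs) const { return a_ == rhs.a_ && b_ == rhs.b_; }

    // Friendship lets the non-member operator read private members
    // without requiring public getters.
    friend bool operator==(const Pair &lhs, const Pair &rhs);

private:
    unsigned a_;
    unsigned b_;
};

// Non-member form: both operands are treated symmetrically, so any
// implicit conversions apply equally to lhs and rhs.
bool operator==(const Pair &lhs, const Pair &rhs)
{
    return lhs.a_ == rhs.a_ && lhs.b_ == rhs.b_;
}
```

Either form works without a heap; the non-member form is the one most style guides prefer for comparison operators.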

What's the deal with const member functions?

This is more of a subtle confusion but it seems like in C++ the const keyword means different things based on the context in which it is used. I'm trying to develop a relatively nuanced understanding of what's happening under the hood and I most certainly have misunderstood many language features, especially because C++ has likely changed greatly in the last ~6-8 years.
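One hedged way to frame it: a const member function makes `this` a pointer-to-const, so through a const reference only const members are callable, regardless of access specifiers. The names below are illustrative:

```cpp
#include <cassert>

class Counter {
public:
    explicit Counter(int start) : value_(start) {}

    // 'const' here makes 'this' a 'const Counter *', so the body may
    // only read value_ and call other const members.
    int get(void) const { return value_; }

    // Non-const member: callable only on non-const objects/references.
    void increment(void) { ++value_; }

private:
    int value_;
};

// Because 'c' is a reference-to-const, only const members compile here;
// c.increment() would be rejected at compile time even though it's public.
int read_counter(const Counter &c) { return c.get(); }
```

This is why the getters in example 2 had to be const: operator== takes its operands by const reference, and a non-const getter cannot be called through them.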

When should I use enum classes versus plain old enum?


To be honest I'm not entirely certain I fully understand the implications of using enum versus enum class in C++.
This is made more confusing by the fact that there are subtle differences between the way C and C++ treat or permit various language constructs (const, enum, typedef, struct, void*, pointer aliasing, type punning, tentative declarations).
In C, enums decay to integer values at compile time. But in C++, the way I currently understand it, enums are their own type. Thus, in C, the following code would be valid, but a C++ compiler would generate a warning (or an error, haven't actually tested it):
/* Example 3: (enums: valid in C, invalid in C++) */
enum COLOR
{
    RED,
    BLUE,
    GREY
};

enum PET
{
    CAT,
    DOG,
    FROG
};

/* This is compatible with a C-style enum conception but not C++ */
enum SHAPE
{
    BALL = RED, /* In C, these work because int = int is valid */
    CUBE = DOG,
};
If my understanding is indeed the case, do enums have an implicit namespace (language construct, not the C++ keyword) as in C? As an add-on to that, in C++, you can also declare enums as a sort of inherited type (below). What am I supposed to make of this? Should I just be using it to reduce code size when possible (similar to gcc option -fuse-packed-enums)? Since most processors are word based, would it be more performant to use the processor's word type than the syntax specified above?
/* Example 4: (Purely C++ style enums, use of enum class / enum struct) */

/* C++ permits forward enum declaration with type specified */
enum FRUIT : int;
enum VEGGIE : short;

enum FRUIT /* As I understand it, these are ints */
{
    APPLE,
    ORANGE,
};

enum VEGGIE /* As I understand it, these are shorts */
{
    CARROT,
    TURNIP,
};
Complicating things even further, I've also seen the following syntax:
/* What the heck is an enum class anyway? When should I use them? */
enum class THING
{
    THING1,
    THING2,
    THING3
};

/* And if classes and structs are interchangeable (minus assumptions
 * about default access specifiers), what does that mean for
 * the following definition? */
enum struct FOO /* Is this even valid syntax? */
{
    FOO1,
    FOO2,
    FOO3
};
Given that enumerated types greatly improve code readability, I've been trying to wrap my head around all this. When should I be using the various language constructs? Are there any pitfalls in a given method?
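As a hedged sketch of the distinction (names are illustrative): plain enums inject their enumerators into the enclosing scope and convert implicitly to integers, while enum class enumerators are scoped, never convert implicitly, and can fix a small underlying type; `enum struct` is exactly equivalent to `enum class`:

```cpp
#include <cassert>

// Plain enum: RED and BLUE leak into the enclosing scope and
// implicitly convert to int (the C-compatible behavior).
enum Color { RED, BLUE, GREY };

// Scoped enum: enumerators live inside Pet:: and do not convert
// implicitly; the underlying type is fixed explicitly here.
enum class Pet : unsigned char { CAT, DOG, FROG };

int color_to_int(Color c) { return c; }  // implicit conversion: OK

int pet_to_int(Pet p)
{
    // return p;  // would not compile: no implicit conversion
    return static_cast<int>(p);          // explicit cast required
}
```

The fixed underlying type answers the size question directly: here sizeof(Pet) is 1, whereas a plain enum's underlying type is implementation-chosen (typically the processor's int).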

When to use POD structs (a-la C style) versus a class implementation?


If I had to take a stab at answering this question, my intuition would be to use POD structs for passing aggregate types (as in function arguments) and using classes for interface abstractions / object abstractions as in the example below:
struct aggregate
{
    unsigned int related_stuff1;
    unsigned int related_stuff2;
    char name_of_the_related_stuff[20];
};

class abstraction
{
private:
    unsigned int private_member1;
    unsigned int private_member2;

protected:
    unsigned int stuff_for_child_classes;

public:
    /* big 3 */
    abstraction(void);
    abstraction(const abstraction &other);
    ~abstraction(void);

    /* COPY semantic (I have a better grasp on this abstraction than MOVE) */
    abstraction &operator=(const abstraction &rhs);

    /* MOVE semantic (subtle semantics of which I don't fully grasp yet) */
    abstraction &operator=(abstraction &&rhs);

    /*
     * I've seen implementations of this that use a copy + swap design pattern
     * but that relies on std::move and I realllllly don't get what is
     * happening under the hood in std::move
     */
    abstraction &operator=(abstraction rhs);

    void do_some_stuff(void); /* member function */
};
Is there an accepted best practice for this or is it entirely preference? Are there arguments for only using classes? What about vtables (in cases where I need byte-wise alignment, such as device register overlays, and have to guarantee placement of precise members)?

Is there a best practice for integrating C code?


Typically (and up to this point), I've just done the following:
/* Example 5: Linking a C library */

/* Disable name-mangling, and then give the C++ linker /
 * toolchain the compiled binaries */
#ifdef __cplusplus
extern "C" {
#endif /* C linkage */

#include "device_driver_header_or_a_c_library.h"

#ifdef __cplusplus
}
#endif /* C linkage */

/* C++ code goes here */
As far as I know, this is the only way to prevent the C++ compiler from generating different object symbols than those in the C header file. Again, this may just be ignorance of C++ standards on my part.

What is the proper way to selectively incorporate RTTI without code size bloat?

Is there even a way? I'm relatively fluent in CMake but I guess the underlying question is whether binaries that incorporate RTTI are compatible with those that don't (and the pitfalls that may ensue when mixing the two).

What about compile time string formatting?


One of my biggest gripes about C (particularly regarding string manipulation) is that variadic arguments frequently (especially on embedded targets) get handled at runtime. This makes string manipulation via the C standard library (printf-style format strings) uncomputable at compile time in C.
This is sadly the case even when the ranges and values of parameters and formatting outputs are entirely known beforehand. C++ template programming seems to be a big thing in "modern" C++ and I've seen a few projects on this sub that use the Turing-completeness of the template system to do some crazy things at compile time. Is there a way to bypass this ABI limitation using C++ features like constexpr, templates, and lambdas? My (somewhat pessimistic) suspicion is that since the generated assembly must be ABI-compliant this isn't possible. Is there a way around this? What about the std::format stuff I've been seeing on this sub periodically?
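One hedged possibility (not std::format itself; the function name is illustrative): the varargs call itself still happens at runtime, but a constexpr function can validate properties of the format string at compile time and reject mismatches via static_assert, which needs nothing beyond C++14:

```cpp
#include <cassert>
#include <cstddef>

// Counts '%' conversion specifiers in a format string at compile time
// ("%%" escapes are skipped). Usable inside static_assert, so a
// printf-style call with the wrong argument count can fail the build
// instead of misbehaving at runtime.
constexpr std::size_t count_specifiers(const char *fmt)
{
    std::size_t n = 0;
    for (std::size_t i = 0; fmt[i] != '\0'; ++i) {
        if (fmt[i] == '%') {
            if (fmt[i + 1] == '%') { ++i; }  // literal "%%", not a specifier
            else                   { ++n; }
        }
    }
    return n;
}

// Rejected at compile time if the count doesn't match the intended
// two arguments - no heap, no RTTI, no runtime cost.
static_assert(count_specifiers("addr=%x val=%u\n") == 2,
              "format/argument mismatch");
```

std::format (C++20) takes this much further by type-checking the entire format string at compile time, but the static_assert trick above is available on older embedded toolchains.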

Is there a standard practice for namespaces and when to start incorporating them?

Is it from the start? Is it when the boundaries of a module become clearly defined? Or is it just personal preference / based on project scale and modularity?
If I had to make a guess it would be at the point that you get a "build group" for a project (a group of source files that should be compiled together), as that would loosely define the boundaries of a series of abstractions / APIs you may provide to other parts of a project.
--EDIT-- markdown formatting
submitted by aWildElectron to cpp_questions [link] [comments]

A fix for Arkham Knight's stuttering / framerate issues via D3D11 hooking

I've been suffering from stuttering issues with Arkham Knight ever since it came out, and I finally broke and decided to investigate _why_ it's that slow - TL;DR: turns out the streaming system keeps trying to create thousands of new textures instead of recycling them, among other issues.
I grabbed the source for ReShade (to use the API hooking / interception) and implemented a texture pool on top of D3D11, so the game remains unaware it exists yet gets the performance benefits from the texture reuse. It seemed to fix the issue for me and now I get a mostly locked 60 FPS game, so here's hoping it works for other people.
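The recycling idea can be sketched as follows - to be clear, this is not the fix's actual code (the real version intercepts D3D11 calls and keys on full texture descriptions); the struct and class names here are hypothetical:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

// Hypothetical stand-in for a GPU texture; the real fix pools
// ID3D11Texture2D objects keyed on their D3D11_TEXTURE2D_DESC.
struct Texture { uint32_t width, height, format; };

class TexturePool {
public:
    // Reuse a compatible released texture if one exists; otherwise fall
    // through to the (expensive) creation the game would have done anyway.
    Texture *acquire(uint32_t w, uint32_t h, uint32_t fmt)
    {
        std::vector<Texture *> &bucket = free_[{w, h, fmt}];
        if (!bucket.empty()) {
            Texture *t = bucket.back();
            bucket.pop_back();
            return t;                       // recycled: no allocation
        }
        ++creations_;
        return new Texture{w, h, fmt};      // cold path: real create
    }

    // Intercepted "release": park the texture for reuse instead of
    // destroying it, so the streaming system stops thrashing the driver.
    void release(Texture *t)
    {
        free_[{t->width, t->height, t->format}].push_back(t);
    }

    std::size_t creations(void) const { return creations_; }

private:
    using Key = std::tuple<uint32_t, uint32_t, uint32_t>;
    std::map<Key, std::vector<Texture *>> free_;
    std::size_t creations_ = 0;
};
```

Because the pool sits behind the hooked API, the game keeps "creating" and "destroying" textures while actually cycling through a small warm set.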
[Update: Want to do your own perf investigation? Check out the WIP follow-up blog post - Arkham Quixote: Detective Mode]
The source code is available at https://code.sherief.fyi/sherief/arkham-fixesrc/branch/batman and the binary / DLL itself can be downloaded from https://sherief.fyi/arkham-knight/dxgi.dll (AMD users: see notes below) and used just like ReShade by dropping it in the same directory as the game's executable. Please keep in mind the following:
If you have any issues with this feel free to reach out to me - no promises, but I'll do my best to get it working on as many systems as possible. For anyone interested in the technical details, there's a write-up: https://sherief.fyi/post/arkham-quixote/
If you encounter FPS drops, collecting a trace helps me investigate the issue. Here's how to do that: Get the alternative tracing dll from https://sherief.fyi/arkham-knight/dxgi_tracy.dll and the tracing tool called Tracy from https://sherief.fyi/arkham-knight/Tracy.exe
Launch Tracy and click connect. Launch the app with the new dxgi_tracy.dll renamed to dxgi.dll and Tracy should collect a bunch of info in real time. Try to limit the trace to one or two minutes, once that's done close the game (ALT+F4 preferred) then click the wifi icon in Tracy's top left corner and save the trace file - share this file with me and I'll be able to look at your run step-by-step and identify the issue better.
submitted by SheriefFarouk to pcgaming [link] [comments]

what is this i just downloaded (youtube code?)

So this is kind of a weird story. I was planning to restart my computer (can't remember why). I spend most of my time watching YouTube videos so I had a lot of tabs open. So I was watching the videos then deleting the tab but not opening new tabs. So I was down to 2, I think 1; it was a pretty long video so I tried to open a YouTube home page tab just to look while I listened to the video. And this is a short excerpt of what I got.

YouTube
submitted by inhuman7773 to techsupport [link] [comments]

Critique my first time using CMake?

Hello all,
I've written up a handful of CMakeLists.txt files in order to facilitate building the project I'll be working on next. In brief, it's an OpenGL application (via GLAD/GLFW on Linux) that doesn't do anything exciting... Yet. =)
I'm building this from within VSCode sometimes. Sometimes from the command line. However, I get pretty different results when I switch between the two. First, as VSCode's CMake Tools plugin defaults to Ninja and CMake itself defaults to make, the binaries come out awfully different in terms of size between the two, even between the like profiles (Release vs Build, for example).
I've also noticed that the CMAKE_GENERATOR variable doesn't appreciate being set. Again, from within VSCode, the -G Ninja flag is passed by default. This also works from the command line. However, if I add set(CMAKE_GENERATOR "Ninja") to the top-level lists file, it fails when attempting to retrieve glad from GitHub.
Anyway... I'm interested in those behaviors, but what I'd most like is a review of my project structure as it relates to CMake. =)
The setup looks like this:
Here is my top-level lists file:
# Use 3.14+ for all of the FetchContent functionality
cmake_minimum_required(VERSION 3.14)

project(MyApplication
    VERSION 0.1.0
    DESCRIPTION "A cologarment matching outfit generator"
    LANGUAGES CXX C)

set(default_build_type "Release")

if(CMAKE_PROJECT_NAME STREQUAL PROJECT_NAME)
    # don't neglect the
    find_package(Doxygen)
    if(Doxygen_FOUND)
        add_subdirectory(docs)
    else()
        message(STATUS "Doxygen not found; not building docs.")
    endif()

    include(FetchContent)

    # download and configure the glad project
    FetchContent_Declare(
        glad
        GIT_REPOSITORY https://github.com/Dav1dde/glad.git
    )
    FetchContent_MakeAvailable(glad)
    FetchContent_GetProperties(glad)
    if(NOT glad_POPULATED)
        FetchContent_Populate(glad)
        set(
            GLAD_PROFILE "core"
            CACHE STRING "OpenGL profile"
        )
        set(
            GLAD_API "gl=4.6"
            CACHE STRING "API type/version pairs, "
            "like \"gl=3.2,gles=\", no version means latest"
        )
        set(
            GLAD_GENERATOR "c"
            CACHE STRING "Language to generate the binding for")
        add_subdirectory(
            ${glad_SOURCE_DIR}
            ${glad_BINARY_DIR}
        )
    endif()

    # download and configure the GLFW project
    FetchContent_Declare(
        glfw
        GIT_REPOSITORY https://github.com/glfw/glfw.git
    )
    FetchContent_MakeAvailable(glfw)
    FetchContent_GetProperties(glfw)
    if(NOT glfw_POPULATED)
        FetchContent_Populate(glfw)
        set(GLFW_BUILD_DOCS off CACHE BOOL "" FORCE)
        set(GLFW_BUILD_TESTS off CACHE BOOL "" FORCE)
        set(GLFW_BUILD_EXAMPLES off CACHE BOOL "" FORCE)
        add_subdirectory(
            ${glfw_SOURCE_DIR}
            ${glfw_BINARY_DIR}
        )
    endif()

    # download the Dear ImGUI project
    FetchContent_Declare(
        imgui
        GIT_REPOSITORY https://github.com/ocornut/imgui.git
    )
    FetchContent_MakeAvailable(imgui)
    FetchContent_GetProperties(imgui)
    if(NOT imgui_POPULATED)
        FetchContent_Populate(imgui)
        add_subdirectory(
            ${imgui_SOURCE_DIR}
        )
    endif()

    add_subdirectory(src)
    add_subdirectory(application)
endif()
Here is the application folder's lists file:
# main application
add_executable(
    MyApplication
    main.cpp
)

configure_file(
    ${PROJECT_SOURCE_DIR}/include/MyApp/version.hpp.in
    ${PROJECT_BINARY_DIR}/include/MyApp/version.hpp
)

# choose modern OpenGL
cmake_policy(SET CMP0072 NEW)
find_package(OpenGL REQUIRED)  # for Dear ImGui
find_package(Threads REQUIRED) # for GLFW
find_package(X11 REQUIRED)     # for GLFW

# remaining application config
target_compile_options(
    MyApplication
    PRIVATE
    -Werror -Wall -Wextra -Wconversion -Wsign-conversion
    -pedantic-errors -Wno-write-strings
)

target_link_libraries(
    MyApplication
    PRIVATE
    Color # internal Color library
    imgui # for simple, in-window UI
    ${X11_LIBRARIES}
    ${CMAKE_DL_LIBS}
    ${CMAKE_THREAD_LIBS_INIT}
)

target_include_directories(
    MyApplication
    PRIVATE
    ${PROJECT_BINARY_DIR}/include
    ${OPENGL_INCLUDE_DIR}
    ${glad_BINARY_DIR}/include
    ${glfw_SOURCE_DIR}/include
    ${imgui_SOURCE_DIR}
    ${imgui_SOURCE_DIR}/examples
)

# Dear ImGui library configuration
add_library(
    imgui
    ${imgui_SOURCE_DIR}/imgui.cpp
    ${imgui_SOURCE_DIR}/imgui_draw.cpp
    ${imgui_SOURCE_DIR}/imgui_widgets.cpp
    ${imgui_SOURCE_DIR}/examples/imgui_impl_glfw.cpp
    ${imgui_SOURCE_DIR}/examples/imgui_impl_opengl3.cpp
)

target_compile_options(
    imgui
    PRIVATE
    -DIMGUI_IMPL_OPENGL_LOADER_GLAD # configure GLAD as the loader
)

target_include_directories(
    imgui
    PRIVATE
    ${imgui_SOURCE_DIR}
    ${imgui_SOURCE_DIR}/examples
    ${glad_BINARY_DIR}/include # so imgui can find glad
)

target_link_libraries(
    imgui
    PRIVATE
    glad
    glfw
    ${OPENGL_LIBRARY}
)

# shared target properties
set_target_properties(
    MyApplication imgui
    PROPERTIES
    CXX_STANDARD 17
    CXX_STANDARD_REQUIRED on
    CXX_EXTENSIONS off
    LINK_FLAGS_RELEASE -s
)
And finally, the lists file from src/
# Internal Color library
add_library(
    Color
    SHARED
    Color.cpp
    ${PROJECT_SOURCE_DIR}/include/MyApp/Color.hpp
)

target_include_directories(
    Color
    PUBLIC
    ${PROJECT_SOURCE_DIR}/include
)

target_compile_options(
    Color
    PRIVATE
    -Werror -Wall -Wextra -Wconversion -Wsign-conversion -pedantic-errors
)

set_target_properties(
    Color
    PROPERTIES
    CXX_STANDARD 17
    CXX_STANDARD_REQUIRED on
    CXX_EXTENSIONS off
    LINK_FLAGS_RELEASE "-s"
)
There really was no objective here other than to respect the modern CMake paradigm and not have anything terribly redundant or wrong. =)
Any feedback would be appreciated!
submitted by angled_musasabi to cpp_questions [link] [comments]

Microservices: Service-to-service communication

The following excerpt about microservice communication is from the new Microsoft eBook, Architecting Cloud-Native .NET Apps for Azure. The book is freely available for online reading and in a downloadable .PDF format at https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/

Microservice Guidance
When constructing a cloud-native application, you'll want to be sensitive to how back-end services communicate with each other. Ideally, the less inter-service communication, the better. However, avoidance isn't always possible as back-end services often rely on one another to complete an operation.
There are several widely accepted approaches to implementing cross-service communication. The type of communication interaction will often determine the best approach.
Consider the following interaction types:
Microservice systems typically use a combination of these interaction types when executing operations that require cross-service interaction. Let's take a close look at each and how you might implement them.

Queries

Many times, one microservice might need to query another, requiring an immediate response to complete an operation. A shopping basket microservice may need product information and a price to add an item to its basket. There are a number of approaches for implementing query operations.

Request/Response Messaging

One option for implementing this scenario is for the calling back-end microservice to make direct HTTP requests to the microservices it needs to query, shown in Figure 4-8.

Figure 4-8. Direct HTTP communication
While direct HTTP calls between microservices are relatively simple to implement, care should be taken to minimize this practice. To start, these calls are always synchronous and will block the operation until a result is returned or the request times out. What were once self-contained, independent services, able to evolve independently and deploy frequently, now become coupled to each other. As coupling among microservices increases, their architectural benefits diminish.
Executing an infrequent request that makes a single direct HTTP call to another microservice might be acceptable for some systems. However, high-volume calls that invoke direct HTTP calls to multiple microservices aren't advisable. They can increase latency and negatively impact the performance, scalability, and availability of your system. Even worse, a long series of direct HTTP communication can lead to deep and complex chains of synchronous microservices calls, shown in Figure 4-9:

Figure 4-9. Chaining HTTP queries
You can certainly imagine the risk in the design shown in the previous image. What happens if Step #3 fails? Or Step #8 fails? How do you recover? What if Step #6 is slow because the underlying service is busy? How do you continue? Even if all works correctly, think of the latency this call would incur, which is the sum of the latency of each step.
The large degree of coupling in the previous image suggests the services weren't optimally modeled. It would behoove the team to revisit their design.

Materialized View pattern

A popular option for removing microservice coupling is the Materialized View pattern. With this pattern, a microservice stores its own local, denormalized copy of data that's owned by other services. Instead of the Shopping Basket microservice querying the Product Catalog and Pricing microservices, it maintains its own local copy of that data. This pattern eliminates unnecessary coupling and improves reliability and response time. The entire operation executes inside a single process. We explore this pattern and other data concerns in Chapter 5.

Service Aggregator Pattern

Another option for eliminating microservice-to-microservice coupling is an Aggregator microservice, shown in purple in Figure 4-10.

Figure 4-10. Aggregator microservice
The pattern isolates an operation that makes calls to multiple back-end microservices, centralizing its logic into a specialized microservice. The purple checkout aggregator microservice in the previous figure orchestrates the workflow for the Checkout operation. It includes calls to several back-end microservices in a sequenced order. Data from the workflow is aggregated and returned to the caller. While it still implements direct HTTP calls, the aggregator microservice reduces direct dependencies among back-end microservices.

Request/Reply Pattern

Another approach for decoupling synchronous HTTP messages is a Request-Reply Pattern, which uses queuing communication. Communication using a queue is always a one-way channel, with a producer sending the message and consumer receiving it. With this pattern, both a request queue and response queue are implemented, shown in Figure 4-11.

Figure 4-11. Request-reply pattern
Here, the message producer creates a query-based message that contains a unique correlation ID and places it into a request queue. The consuming service dequeues the message, processes it, and places the response into the response queue with the same correlation ID. The producer service dequeues the message, matches it with the correlation ID, and continues processing. We cover queues in detail in the next section.
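A hedged, in-process sketch of that correlation-ID flow (the queue and message types below are illustrative stand-ins for a real broker client, not any particular SDK):

```cpp
#include <cassert>
#include <deque>
#include <map>
#include <string>

// Hypothetical message shape: every request and reply carries the
// correlation ID that ties them together.
struct Message { int correlation_id; std::string body; };

std::deque<Message> request_queue;   // producer -> consumer
std::deque<Message> response_queue;  // consumer -> producer

// Producer side: enqueue a query tagged with a unique correlation ID.
void send_request(int id, const std::string &query)
{
    request_queue.push_back({id, query});
}

// Consumer side: dequeue one request, process it, and reply with the
// SAME correlation ID so the producer can match the response.
void serve_one(void)
{
    Message req = request_queue.front();
    request_queue.pop_front();
    response_queue.push_back({req.correlation_id, "price-for:" + req.body});
}

// Producer side: drain the response queue and match replies back to
// pending requests by correlation ID.
std::map<int, std::string> collect_responses(void)
{
    std::map<int, std::string> by_id;
    while (!response_queue.empty()) {
        by_id[response_queue.front().correlation_id] = response_queue.front().body;
        response_queue.pop_front();
    }
    return by_id;
}
```

In a real system the two queues live in a broker and the producer blocks or polls asynchronously, but the matching logic is the same.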

Commands

Another type of communication interaction is a command. A microservice may need another microservice to perform an action. The Ordering microservice may need the Shipping microservice to create a shipment for an approved order. In Figure 4-12, one microservice, called a Producer, sends a message to another microservice, the Consumer, commanding it to do something.

Figure 4-12. Command interaction with a queue
Most often, the Producer doesn't require a response and can fire-and-forget the message. If a reply is needed, the Consumer sends a separate message back to the Producer on another channel. A command message is best sent asynchronously with a message queue, supported by a lightweight message broker. In the previous diagram, note how a queue separates and decouples both services.
A message queue is an intermediary construct through which a producer and consumer pass a message. Queues implement an asynchronous, point-to-point messaging pattern. The Producer knows where a command needs to be sent and routes appropriately. The queue guarantees that a message is processed by exactly one of the consumer instances that are reading from the channel. In this scenario, either the producer or consumer service can scale out without affecting the other. As well, technologies can be disparate on each side, meaning that we might have a Java microservice calling a Golang microservice.
In chapter 1, we talked about backing services. Backing services are ancillary resources upon which cloud-native systems depend. Message queues are backing services. The Azure cloud supports two types of message queues that your cloud-native systems can consume to implement command messaging: Azure Storage Queues and Azure Service Bus Queues.

Azure Storage Queues

Azure storage queues offer a simple queueing infrastructure that is fast, affordable, and backed by Azure storage accounts.
Azure Storage Queues feature a REST-based queuing mechanism with reliable and persistent messaging. They provide a minimal feature set, but are inexpensive and store millions of messages. Their capacity ranges up to 500 TB. A single message can be up to 64 KB in size.
You can access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. Storage queues can scale out to large numbers of concurrent clients to handle traffic spikes.
That said, there are limitations with the service:
Figure 4-13 shows the hierarchy of an Azure Storage Queue.

Figure 4-13. Storage queue hierarchy
In the previous figure, note how storage queues store their messages in the underlying Azure Storage account.
For developers, Microsoft provides several client and server-side libraries for Storage queue processing. Most major platforms are supported including .NET, Java, JavaScript, Ruby, Python, and Go. Developers should never communicate directly with these libraries. Doing so will tightly couple your microservice code to the Azure Storage Queue service. It's a better practice to insulate the implementation details of the API. Introduce an intermediation layer, or intermediate API, that exposes generic operations and encapsulates the concrete library. This loose coupling enables you to swap out one queuing service for another without having to make changes to the mainline service code.
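The intermediation layer described above can be sketched as follows (a language-agnostic idea shown here in C++; the interface and class names are illustrative, not from any Azure SDK):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Generic operations the rest of the service codes against. Only this
// interface is visible to mainline code; the concrete queue library
// stays hidden behind it.
class CommandQueue {
public:
    virtual ~CommandQueue() = default;
    virtual void send(const std::string &message) = 0;
    virtual std::string receive(void) = 0;
};

// One concrete adapter. Swapping Storage Queues for Service Bus means
// writing another adapter like this one, with no change to callers.
class InMemoryQueue : public CommandQueue {
public:
    void send(const std::string &message) override { items_.push_back(message); }
    std::string receive(void) override
    {
        std::string front = items_.front();
        items_.erase(items_.begin());
        return front;
    }

private:
    std::vector<std::string> items_;
};

// Mainline service code depends only on the abstraction.
std::string round_trip(CommandQueue &q, const std::string &cmd)
{
    q.send(cmd);
    return q.receive();
}
```

The loose coupling comes from the fact that `round_trip` compiles without ever seeing a concrete queue type.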
Azure Storage queues are an economical option to implement command messaging in your cloud-native applications, especially when queue size will exceed 80 GB or a simple feature set is acceptable. You only pay for the storage of the messages; there are no fixed hourly charges.

Azure Service Bus Queues

For more complex messaging requirements, consider Azure Service Bus queues.
Sitting atop a robust message infrastructure, Azure Service Bus supports a brokered messaging model. Messages are reliably stored in a broker (the queue) until received by the consumer. The queue guarantees First-In/First-Out (FIFO) message delivery, respecting the order in which messages were added to the queue.
The size of a message can be much larger, up to 256 KB. Messages are persisted in the queue for an unlimited period of time. Service Bus supports not only HTTP-based calls, but also provides full support for the AMQP protocol. AMQP is an open-standard across vendors that supports a binary protocol and higher degrees of reliability.
Service Bus provides a rich set of features, including transaction support and a duplicate detection feature. The queue guarantees "at most once delivery" per message. It automatically discards a message that has already been sent. If a producer is in doubt, it can resend the same message, and Service Bus guarantees that only one copy will be processed. Duplicate detection frees you from having to build additional infrastructure plumbing.
Two more enterprise features are partitioning and sessions. A conventional Service Bus queue is handled by a single message broker and stored in a single message store. But, Service Bus Partitioning spreads the queue across multiple message brokers and message stores. The overall throughput is no longer limited by the performance of a single message broker or messaging store. A temporary outage of a messaging store doesn't render a partitioned queue unavailable.
Service Bus Sessions provide a way to group related messages. Imagine a workflow scenario where messages must be processed together and the operation completed at the end. To take advantage, sessions must be explicitly enabled for the queue and each related message must contain the same session ID.
However, there are some important caveats: Service Bus queue size is limited to 80 GB, which is much smaller than what's available from storage queues. Additionally, Service Bus queues incur a base cost and charge per operation.
Figure 4-14 outlines the high-level architecture of a Service Bus queue.

Figure 4-14. Service Bus queue
In the previous figure, note the point-to-point relationship. Two instances of the same provider are enqueuing messages into a single Service Bus queue. Each message is consumed by only one of three consumer instances on the right. Next, we discuss how to implement messaging where different consumers may all be interested in the same message.

Events

Message queuing is an effective way to implement communication where a producer can asynchronously send a consumer a message. However, what happens when many different consumers are interested in the same message? A dedicated message queue for each consumer wouldn't scale well and would become difficult to manage.
To address this scenario, we move to the third type of message interaction, the event. One microservice announces that an action has occurred. Other microservices, if interested, react to the action, or event.
Eventing is a two-step process. For a given state change, a microservice publishes an event to a message broker, making it available to any other interested microservice. The interested microservice is notified by subscribing to the event in the message broker. You use the Publish/Subscribe pattern to implement event-based communication.
Figure 4-15 shows a shopping basket microservice publishing an event with two other microservices subscribing to it.

Figure 4-15. Event-Driven messaging
Note the event bus component that sits in the middle of the communication channel. It's a custom class that encapsulates the message broker and decouples it from the underlying application. The ordering and inventory microservices independently operate the event with no knowledge of each other, nor the shopping basket microservice. When the registered event is published to the event bus, they act upon it.
With eventing, we move from queuing technology to topics. A topic is similar to a queue, but supports a one-to-many messaging pattern. One microservice publishes a message. Multiple subscribing microservices can choose to receive and act upon that message. Figure 4-16 shows a topic architecture.

Figure 4-16. Topic architecture
In the previous figure, publishers send messages to the topic. At the end, subscribers receive messages from subscriptions. In the middle, the topic forwards messages to subscriptions based on a set of rules, shown in dark blue boxes. Rules act as a filter that forward specific messages to a subscription. Here, a "GetPrice" event would be sent to the price and logging Subscriptions as the logging subscription has chosen to receive all messages. A "GetInformation" event would be sent to the information and logging subscriptions.
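A hedged sketch of that rule-based forwarding (the "GetPrice" / "GetInformation" event names follow the figure; everything else is illustrative):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal topic: each subscription registers a filter rule, and a
// published event is forwarded to every subscription whose rule
// matches - one-to-many, unlike a queue's exactly-one consumer.
class Topic {
public:
    void subscribe(const std::string &name,
                   std::function<bool(const std::string &)> rule)
    {
        subs_[name] = {rule, {}};
    }

    void publish(const std::string &event)
    {
        for (auto &s : subs_)
            if (s.second.rule(event))          // rule acts as the filter
                s.second.received.push_back(event);
    }

    const std::vector<std::string> &inbox(const std::string &name)
    {
        return subs_.at(name).received;
    }

private:
    struct Sub {
        std::function<bool(const std::string &)> rule;
        std::vector<std::string> received;
    };
    std::map<std::string, Sub> subs_;
};
```

With a rule of "accept everything", a logging subscription receives every event, while a price subscription with a narrow rule sees only "GetPrice" - exactly the routing the figure describes.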
The Azure cloud supports two different topic services: Azure Service Bus Topics and Azure EventGrid.

Azure Service Bus Topics

Sitting on top of the same robust brokered message model of Azure Service Bus queues are Azure Service Bus Topics. A topic can receive messages from multiple independent publishers and send messages to up to 2,000 subscribers. Subscriptions can be dynamically added or removed at runtime without stopping the system or recreating the topic.
Many advanced features from Azure Service Bus queues are also available for topics, including Duplicate Detection and Transaction support. By default, Service Bus topics are handled by a single message broker and stored in a single message store. But, Service Bus Partitioning scales a topic by spreading it across many message brokers and message stores.
Scheduled Message Delivery tags a message with a specific time for processing. The message won't appear in the topic before that time. Message Deferral enables you to defer a retrieval of a message to a later time. Both are commonly used in workflow processing scenarios where operations are processed in a particular order. You can postpone processing of received messages until prior work has been completed.
Service Bus topics are a robust and proven technology for enabling publish/subscribe communication in your cloud-native systems.

Azure Event Grid

While Azure Service Bus is a battle-tested messaging broker with a full set of enterprise features, Azure Event Grid is the new kid on the block.
At first glance, Event Grid may look like just another topic-based messaging system. However, it's different in many ways. Focused on event-driven workloads, it enables real-time event processing, deep Azure integration, and an open platform - all on serverless infrastructure. It's designed for contemporary cloud-native and serverless applications.
As a centralized eventing backplane, or pipe, Event Grid reacts to events inside Azure resources and from your own services.
Event notifications are published to an Event Grid Topic, which, in turn, routes each event to a subscription. Subscribers map to subscriptions and consume the events. Like Service Bus, Event Grid supports a filtered subscriber model where a subscription sets rules for the events it wishes to receive. Event Grid provides fast throughput with a guarantee of 10 million events per second, enabling near real-time delivery - far more than what Azure Service Bus can generate.
A sweet spot for Event Grid is its deep integration into the fabric of Azure infrastructure. An Azure resource, such as Cosmos DB, can publish built-in events directly to other interested Azure resources - without the need for custom code. Event Grid can publish events from an Azure Subscription, Resource Group, or Service, giving developers fine-grained control over the lifecycle of cloud resources. However, Event Grid isn't limited to Azure. It's an open platform that can consume custom HTTP events published from applications or third-party services and route events to external subscribers.
When publishing and subscribing to native events from Azure resources, no coding is required. With simple configuration, you can integrate events from one Azure resource to another leveraging built-in plumbing for Topics and Subscriptions. Figure 4-17 shows the anatomy of Event Grid.

Figure 4-17. Event Grid anatomy
A major difference between Event Grid and Service Bus is the underlying message exchange pattern.
Service Bus implements an older style pull model in which the downstream subscriber actively polls the topic subscription for new messages. On the upside, this approach gives the subscriber full control of the pace at which it processes messages. It controls when and how many messages to process at any given time. Unread messages remain in the subscription until processed. A significant shortcoming is the latency between the time the event is generated and the polling operation that pulls that message to the subscriber for processing. Also, the overhead of constant polling for the next event consumes resources and money.
Event Grid, however, is different. It implements a push model in which events are sent to the event handlers as they're received, giving near real-time event delivery. It also reduces cost as the service is triggered only when it's needed to consume an event – not continually as with polling. That said, an event handler must handle the incoming load and provide throttling mechanisms to protect itself from becoming overwhelmed. Many Azure services that consume these events, such as Azure Functions and Logic Apps, provide automatic scaling capabilities to handle increased loads.
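The pull-versus-push distinction can be sketched side by side. This is a simplified in-memory illustration (not either Azure service): in the pull model, published messages sit in the subscription until the subscriber polls; in the push model, handlers are invoked the moment an event is published.

```python
import collections

class PullSubscription:
    """Pull model (Service Bus style): messages wait until polled."""
    def __init__(self):
        self._queue = collections.deque()

    def publish(self, msg):
        self._queue.append(msg)  # message sits in the subscription

    def poll(self, max_messages=10):
        batch = []
        while self._queue and len(batch) < max_messages:
            batch.append(self._queue.popleft())
        return batch

class PushTopic:
    """Push model (Event Grid style): handlers run as events arrive."""
    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        self._handlers.append(handler)

    def publish(self, event):
        for handler in self._handlers:
            handler(event)  # delivered immediately, no polling latency

received = []
push = PushTopic()
push.subscribe(received.append)
push.publish("created")      # handler fires right away

pull = PullSubscription()
pull.publish("created")      # nothing happens until the subscriber polls
print(received, pull.poll())  # ['created'] ['created']
```

The pull subscriber controls its own pace (and batch size) at the cost of polling latency; the push topic delivers instantly but requires handlers that can absorb the incoming rate.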
Event Grid is a fully managed serverless cloud service. It dynamically scales based on your traffic and charges you only for your actual usage, not pre-purchased capacity. The first 100,000 operations per month are free – operations being defined as event ingress (incoming event notifications), subscription delivery attempts, management calls, and filtering by subject. With 99.99% availability, Event Grid guarantees the delivery of an event within a 24-hour period, with built-in retry functionality for unsuccessful delivery. Undelivered messages can be moved to a "dead-letter" queue for resolution. Unlike Azure Service Bus, Event Grid is tuned for fast performance and doesn't support features like ordered messaging, transactions, and sessions.

Streaming messages in the Azure cloud

Azure Service Bus and Event Grid provide great support for applications that expose single, discrete events, such as a new document being inserted into a Cosmos DB database. But, what if your cloud-native system needs to process a stream of related events? Event streams are more complex. They're typically time-ordered, interrelated, and must be processed as a group.
Azure Event Hub is a data streaming platform and event ingestion service that collects, transforms, and stores events. It's fine-tuned to capture streaming data, such as continuous event notifications emitted from a telemetry context. The service is highly scalable and can store and process millions of events per second. Shown in Figure 4-18, it's often a front door for an event pipeline, decoupling ingest stream from event consumption.

Figure 4-18. Azure Event Hub
Event Hub supports low latency and configurable time retention. Unlike queues and topics, Event Hubs keep event data after it's been read by a consumer. This feature enables other data analytic services, both internal and external, to replay the data for further analysis. Events stored in an event hub are only deleted upon expiration of the retention period, which is one day by default but configurable.
Event Hub supports common event publishing protocols including HTTPS and AMQP. It also supports Kafka 1.0. Existing Kafka applications can communicate with Event Hub using the Kafka protocol providing an alternative to managing large Kafka clusters. Many open-source cloud-native systems embrace Kafka.
Event Hubs implements message streaming through a partitioned consumer model in which each consumer only reads a specific subset, or partition, of the message stream. This pattern enables tremendous horizontal scale for event processing and provides other stream-focused features that are unavailable in queues and topics. A partition is an ordered sequence of events that is held in an event hub. As newer events arrive, they're added to the end of this sequence. Figure 4-19 shows partitioning in an Event Hub.

Figure 4-19. Event Hub partitioning
Instead of reading from the same resource, each consumer group reads across a subset, or partition, of the message stream.
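The partitioned log model described above can be sketched as a toy in-memory structure (a conceptual illustration, not the Event Hubs SDK): events with the same key land in the same partition, each partition is an ordered append-only sequence, and consumers read by offset, which is what makes replay possible.

```python
class EventHub:
    """Toy partitioned event log. Events are appended to a partition
    chosen by key; consumers read a partition from any offset."""

    def __init__(self, partition_count=4):
        self.partitions = [[] for _ in range(partition_count)]

    def send(self, key, event):
        p = hash(key) % len(self.partitions)  # same key -> same partition
        self.partitions[p].append(event)      # append preserves order
        return p

    def read(self, partition, offset=0):
        """Read from a given offset -- earlier events can be replayed."""
        return self.partitions[partition][offset:]

hub = EventHub(partition_count=4)
p = hub.send("device-1", "reading-1")
hub.send("device-1", "reading-2")     # same key, same ordered partition
print(hub.read(p))                    # ['reading-1', 'reading-2']
print(hub.read(p, offset=1))          # replay from a later offset: ['reading-2']
```

Because each consumer group owns its own offsets per partition, many independent readers can scale out horizontally and re-read the stream without interfering with one another.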
For cloud-native applications that must stream large numbers of events, Azure Event Hub can be a robust and affordable solution.

About the Author:
Rob Vettor is a Principal Cloud-Native Architect for the Microservice Enterprise Service Group. Reach out to Rob at [[email protected]](mailto:[email protected]) or https://thinkingincloudnative.com/weclome-to-cloud-native/
submitted by robvettor to microservices [link] [comments]

Using Deep Learning to Predict Earnings Outcomes

Using Deep Learning to Predict Earnings Outcomes
(Note: if you were following my earlier posts, I wrote a note at the end of this post explaining why I deleted old posts and what changed)
Edit: Can't reply to comments since my account is still flagged as new :\. Thank you everyone for your comments. Edit: Made another post answering questions here.
  • Test data is untouched during training 10:1:1 train:val:test.
  • Yes, I consider it "deep" learning from what I learned at my institution. I use LSTMs at one point in my pipeline, feel free to consider that deep or not.
  • I'll be making daily posts so that people can follow along.
  • Someone mentioned RL, yes I plan on trying that in the future :). This would require a really clever way to encode the current state parameters. Haven't thought about it too much yet.
  • Someone mentioned how companies beat earnings 61% of the time anyway, so my model must be useless, right? Well, if you look at the confusion matrix you can see I balanced classes before training (with some noise). This means the data had roughly 50/50 beat/miss classes, and the model still reached 58% test accuracy.
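The class balancing and 10:1:1 split mentioned in the points above can be sketched as follows. This is my own illustrative reconstruction, not the author's code: downsample the majority class to roughly 50/50, shuffle, then carve off train, validation, and test so the test set is never touched during training.

```python
import random

def balance_and_split(samples, ratios=(10, 1, 1), seed=0):
    """Downsample the majority class to 50/50, then split train:val:test."""
    rng = random.Random(seed)
    beats  = [s for s in samples if s["label"] == "beat"]
    misses = [s for s in samples if s["label"] == "miss"]
    n = min(len(beats), len(misses))          # size of the smaller class
    data = rng.sample(beats, n) + rng.sample(misses, n)
    rng.shuffle(data)
    total = sum(ratios)
    i = len(data) * ratios[0] // total        # end of train split
    j = i + len(data) * ratios[1] // total    # end of val split
    return data[:i], data[i:j], data[j:]      # test split stays untouched

# Hypothetical dataset: 600 beats, 400 misses -> balanced to 400/400.
samples = [{"label": "beat"}] * 600 + [{"label": "miss"}] * 400
train, val, test = balance_and_split(samples)
print(len(train), len(val), len(test))  # roughly 10:1:1 over 800 rows
```

With balanced classes, a naive "always predict beat" baseline scores ~50%, so 58% test accuracy is a real edge over chance rather than an artifact of class imbalance.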
TLDR:
Not financial advice.
  • I created a deep learning algorithm trained on 2015-2019 data to predict whether a company will beat earning estimates.
  • Algorithm has an accuracy of 58%.
  • I need data and suggestions.
  • I’ll be making daily posts for upcoming earnings.
Greetings everyone,
I’m Bunga, an engineering PhD student at well known university. Like many of you, I developed an interest in trading because of the coronavirus. I lost a lot of money by being greedy and uninformed about how to actually trade options. With all the free time I have with my research slowing down because of the virus, I’ve decided to use what I’m good at (being a nerd, data analytics, and machine learning) to help me make trades.
One thing that stuck out to me was how people make bets on earnings reports. As a practitioner of machine learning, we LOVE binary events since the problem can be reduced to a simple binary classification problem. With that being said, I sought out to develop a machine learning algorithm to predict whether a company will beat earnings estimates.
I strongly suggest TO NOT USE THIS AS FINANCIAL ADVICE. Please, I could just be a random guy on the internet making things up, and I could have bugs in my code. Just follow along for some fun and don’t make any trades based off of this information 😊
Things other people have tried:
A few other projects have tried to do this to some extent [1,2,3], but some are not directly predicting the outcome of the earnings report or have a very small sample size of a few companies.
The data
This has been the most challenging part of the project. I’m using data for 4,000 common stocks.
Open, high, low, close, volume stock data is often free and easy to come by. I use stock data during the quarter (Jan 1 – Mar 31 stock data for Q1 for example) in a time series classifier. I also incorporate “background” data from several ETFs to give the algorithm a feel for how the market is doing overall (hopefully this accounts for bull/bear markets when making predictions).
I use sentiment analyses extracted from 10K/10Q documents from the previous quarter as described in [4]. This gets passed to a multilayer perceptron neural network.
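A multilayer perceptron of the kind mentioned above boils down to a couple of matrix products. This is a minimal numpy sketch under my own assumptions (6 hypothetical sentiment features, 8 hidden units), not the author's network:

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: sentiment features -> score in (0, 1)."""
    h = np.tanh(x @ W1 + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 6))   # e.g. 6 sentiment scores from a 10-K/10-Q
W1, b1 = rng.normal(size=(6, 8)), np.zeros(8)   # untrained demo weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
score = mlp_forward(x, W1, b1, W2, b2)
print(float(score))  # a value strictly between 0 and 1
```

The sigmoid squashes the output into (0, 1), which is why the predictions later in the post read like probabilities even though, as noted there, they aren't calibrated ones.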
Data that I’ve tried that doesn’t work too well:
Scraping 10K/10Q manually for US GAAP fields like Assets, Cash, StockholdersEquity, etc. Either I’m not very good at processing the data or most of the tables are incomplete; either way, this doesn’t work well. However, I recently came across this amazing API [5], which will ameliorate most of these problems, and I plan on incorporating this data sometime this week.
Results
After training on about 34,000 data points, the model achieves a 58% accuracy on the test data. Class 1 is beat earnings, Class 2 is miss earnings. Scroll to the bottom for the predictions for today’s AMC estimates.

https://preview.redd.it/fqapvx2z1tv41.png?width=875&format=png&auto=webp&s=05ea5cae25ee5689edea334f2814e1fa73aa195d
Future Directions
Things I’m going to try:
  • Financial twitter sentiment data (need data for this)
  • Data on options (ToS apparently has stuff that you can use)
  • Using data closer to the earnings report itself rather than just the data from within the quarter
A note to the dozen people who were following me before
Thank you so much for the early feedback and following. I had a bug in my code which was replicating datapoints, causing my accuracy to be way higher in reality. I’ve modified some things to make the network only output a single value, and I’ve done a lot of bug fixing.
Predictions for 4/30/20 AMC:
A value closer to 1 means that the company will be more likely to beat earnings estimates. Closer to 0 means the company will be more likely to miss earnings estimates. (People familiar with machine learning will note that neural networks don’t actually output a probability distribution so the values don’t actually represent a confidence).
  • Tkr: AAPL NN: 0.504
  • Tkr: AMZN NN: 0.544
  • Tkr: UAL NN: 0.438
  • Tkr: GILD NN: 0.532
  • Tkr: TNDM NN: 0.488
  • Tkr: X NN: 0.511
  • Tkr: AMGN NN: 0.642
  • Tkr: WDC NN: 0.540
  • Tkr: WHR NN: 0.574
  • Tkr: SYK NN: 0.557
  • Tkr: ZEN NN: 0.580
  • Tkr: MGM NN: 0.452
  • Tkr: ILMN NN: 0.575
  • Tkr: MOH NN: 0.500
  • Tkr: FND NN: 0.542
  • Tkr: TWOU NN: 0.604
  • Tkr: OSIS NN: 0.487
  • Tkr: CXO NN: 0.470
  • Tkr: BLDR NN: 0.465
  • Tkr: CASA NN: 0.568
  • Tkr: COLM NN: 0.537
  • Tkr: COG NN: 0.547
  • Tkr: SGEN NN: 0.486
  • Tkr: FMBI NN: 0.496
  • Tkr: PSA NN: 0.547
  • Tkr: BZH NN: 0.482
  • Tkr: LOCO NN: 0.575
  • Tkr: DLA NN: 0.460
  • Tkr: SSNC NN: 0.524
  • Tkr: SWN NN: 0.476
  • Tkr: RMD NN: 0.499
  • Tkr: VKTX NN: 0.437
  • Tkr: EXPO NN: 0.526
  • Tkr: BL NN: 0.516
  • Tkr: FTV NN: 0.498
  • Tkr: ASGN NN: 0.593
  • Tkr: KNSL NN: 0.538
  • Tkr: RSG NN: 0.594
  • Tkr: EBS NN: 0.483
  • Tkr: PRAH NN: 0.598
  • Tkr: RRC NN: 0.472
  • Tkr: ICBK NN: 0.514
  • Tkr: LPLA NN: 0.597
  • Tkr: WK NN: 0.630
  • Tkr: ATUS NN: 0.530
  • Tkr: FBHS NN: 0.587
  • Tkr: SWI NN: 0.521
  • Tkr: TRUP NN: 0.570
  • Tkr: AJG NN: 0.509
  • Tkr: BAND NN: 0.618
  • Tkr: DCO NN: 0.514
  • Tkr: BRKS NN: 0.490
  • Tkr: BY NN: 0.502
  • Tkr: CUZ NN: 0.477
  • Tkr: EMN NN: 0.532
  • Tkr: VICI NN: 0.310
  • Tkr: GLPI NN: 0.371
  • Tkr: MTZ NN: 0.514
  • Tkr: SEM NN: 0.405
  • Tkr: SPSC NN: 0.465
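Turning the raw NN scores above into beat/miss calls is just a thresholding step. A small sketch (my own illustration, using a handful of the listed tickers and the natural 0.5 cutoff):

```python
def classify(scores, threshold=0.5):
    """Split raw NN outputs into predicted beats and misses."""
    beats  = sorted(t for t, s in scores.items() if s >= threshold)
    misses = sorted(t for t, s in scores.items() if s < threshold)
    return beats, misses

scores = {"AMGN": 0.642, "WK": 0.630, "VICI": 0.310, "MOH": 0.500}
beats, misses = classify(scores)
print(beats, misses)  # ['AMGN', 'MOH', 'WK'] ['VICI']
```

Since the outputs aren't calibrated probabilities, the threshold could just as well be tuned on the validation set instead of fixed at 0.5.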
[1] https://towardsdatascience.com/forecasting-earning-surprises-with-machine-learning-68b2f2318936
[2] https://zicklin.baruch.cuny.edu/wp-content/uploads/sites/10/2019/12/Improving-Earnings-Predictions-with-Machine-Learning-Hunt-Myers-Myers.pdf
[3] https://www.euclidean.com/better-than-human-forecasts
[4] https://cran.r-project.org/web/packages/edgaedgar.pdf.
[5] https://financialmodelingprep.com/developedocs/
submitted by xXx_Bunga_xXx to wallstreetbets [link] [comments]

Kaput v1.0.0 - Now with more chill!

Kaput v1.0.0 - Now with more chill!

https://preview.redd.it/n4ad3y6kzva51.png?width=1280&format=png&auto=webp&s=3b73f824f8c467013a11e0e4adba0e8f251f1a88

Version 1.0.0 of Kaput-CLI has just been released to NPM.

Here are some of the things I've added in this version:
  • Torrent indexer searching kindly provided by chill.institute. You can now search all major indexers and add torrents to your Put account directly from the terminal!
  • Support for using multiple accounts.
  • Pure JSON output for some commands.
  • Option to list all files using the --all flag.
  • Added filters from the Put API. For example, you can choose to list only audio files.
  • Added debug which outputs the current state of the config file and where it is located on the disk.
  • You can now use environment variables to authenticate.
  • Other bug fixes and improvements.

Installation
  • NPM: npm install -g kaput-cli
  • As before, binaries for most platforms (Linux, Windows, MacOS) are also available on GitHub. These do not require Node to be installed.

A lot of these changes were added because of people who reached out about them. If you'd like to see something else added, let me know! GitHub is the best place to make a request.
Thanks for the support, let me know if I've broken anything :)
submitted by dccfoux to putdotio [link] [comments]

List of New Su