Cross Platform Dump and Load - Archive

Binding attributes by name

I'm not quite sure how to approach this from a best-practice perspective.
I've discovered today that SPIR-V shaders typically don't have attribute names, with Vulkan expecting you to bind directly by location.
So, in an OpenGL world where I once might have looked up the attribute by name to know where to bind... how would I achieve that now?
It seems I need to know ahead of time exactly where to bind. Is that right?
Is SPIRV-Cross any use here at all? I don't see how it can divine attribute names if they're stripped from the binary. I've read here that some debug modes will preserve attribute names, but surely debug-mode shaders aren't a desirable solution.
I can think of only a few other ways forward...
A) ensure all my shaders have a common layout (seems particularly suboptimal)
B) hardcode a lookup table of all of the layouts for each shader I plan on loading (seems inflexible)
C) some other metadata based approach to loading shaders, that I can produce at build-time and lookup at runtime
Perhaps option B isn’t so bad, but I’m curious how this is approached in larger scale systems.
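To make option C concrete, here's a rough sketch of what I'm imagining: a build-time pass that runs SPIRV-Cross reflection over each module and writes out a manifest of stage inputs. I'm assuming the `spirv-cross` CLI is on PATH and that its `--reflect` output is JSON with an "inputs" array carrying "name"/"location"/"type" fields (names only survive if the compiler kept OpName debug info), so the exact JSON shape is worth double-checking against your SPIRV-Cross version.

```python
#!/usr/bin/env python3
"""Build step: reflect SPIR-V vertex inputs into a JSON manifest (option C).

Assumes the spirv-cross CLI is on PATH and that `spirv-cross --reflect`
emits JSON with an "inputs" array whose entries carry "name", "location",
and "type". Verify the shape against your SPIRV-Cross version.
"""
import json
import subprocess
import sys
from pathlib import Path


def reflect_inputs(spv_path):
    # spirv-cross --reflect prints a JSON reflection blob to stdout.
    out = subprocess.run(
        ["spirv-cross", "--reflect", str(spv_path)],
        check=True, capture_output=True, text=True,
    ).stdout
    data = json.loads(out)
    # Keep just what the runtime needs to build vertex input state.
    return [
        {
            "name": inp.get("name", ""),        # empty if OpName was stripped
            "location": inp["location"],
            "type": inp.get("type", ""),
        }
        for inp in data.get("inputs", [])
    ]


if __name__ == "__main__":
    manifest = {Path(p).name: reflect_inputs(p) for p in sys.argv[1:]}
    Path("shader_manifest.json").write_text(json.dumps(manifest, indent=2))
```

At runtime I'd load shader_manifest.json alongside the SPIR-V blobs and bind by the recorded locations; if names were stripped, the manifest can be keyed by my own semantic names instead.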
submitted by tim-rex to vulkan

CRTPi-480p v3.0X - An unholy bastard for Pi3 && Pi4!

CRTPi Project Presents:

CRTPi-480p v3.0X && v3.4X

A CRTPi image for running 480p via HDMI or VGA!
Other Releases:
Changelog: v3.0X && v3.4X for HDMI&VGA666 05/25/2020
Required Hardware:
What is this?
I finally did something for people who don't want to use expensive video hats and SDTVs! This image boots in your choice of CEA 480p or various 640x480p VGA modes. Using Snap-Shader and Simple-Bilinear-Scanlines, I've given a way to upscale 240p content to 480p while still looking and performing console-fresh! The perfect image for EDTV, HiScan sets, multi-scan monitors, GSB arcade boards -- this even works on HDTVs that don't upscale 480p content!!
That said, I hate it and it should burn in hell... Enjoy!
What Does That Look Like?
Here's a bunch of pics I took, some better than others!
What is Different?
See the current changelog and the v3.0 thread for a complete list.
What is Run-Ahead?
The Run Ahead feature calculates the frames as fast as possible in the background to "rollback" the action as close as possible to the input command requested.
I've enabled run-ahead on most of the 8 & 16-bit consoles and handhelds. A single frame is saved (using the second instance), which dramatically reduces input lag without affecting performance on a Pi3B+. More frames would require more hardware power, and may be achievable via overclocking.
lr-snes9x2010: consistent 60.0-60.2 FPS @ 60.098801hz
lr-fceumm: consistent 60.0-60.2 FPS @ 60.098801hz
lr-beetle-pce-fast: consistent 60.1-60.2 FPS @ 60.000000hz
lr-genesis-gx-plus: consistent 59.9-60.2 FPS @ 59.922741hz (both Genesis and Sega CD)
lr-picodrive: consistent 59.9-60.2 FPS @ 59.922741hz (Master System, Game Gear, and 32X)
lr-gambatte: consistent 60.0-60.2 FPS @ 60.098801hz (SGB2 framerate)
lr-mgba: consistent 59.8-60.4 FPS @ 60.002220hz (Gamecube framerate)
To disable runahead for a game (or emulator):
Quick Menu > Latency > Run-Ahead to Reduce Latency > OFF 
What is Snap-Shader?
It's a Retroarch GLSL shader that ensures games on a CRT will look as good as on original hardware. It makes games crisp vertically, without shimmering horizontally. It correctly aligns the games for you regardless of console, and virtually eliminates the need for separate configurations per core (console).
https://github.com/ektgit/snap-shader-240p
Snap Shader (especially snap-basic) is super useful on consoles where you may have a mix of horizontal resolutions within the core that you don't necessarily want to set individual game configs for -- which, for this build, is basically everything but Megadrive, GBA, GBC, Doom, and Quake.
What Does This NOT Have?
This doesn't have any ROMs (other than freeware test suites), BIOS files, music, screenshots, metadata, or videos concerning copyrighted games. Other than the configurations and overlays, it has nothing that can't be downloaded through the repository or as freeware.
Where Can I Get It?
You can download a premade image from Google Drive:
NOTE: Please expand your file system via Raspi-Config after your first boot, and reboot!
CRTPi-480p v3.0X: For Pi3B/B+ with HDMI or VGA666 LIVE @ NOW
MD5: 9ad75efe8516ab0e7f2df3b084e93dcd 
CRTPi-480p v3.4X: For Pi4B with HDMI or VGA666 LIVE @ 16:40PST
MD5: 7272a6ac24fa5004a1f6c961264b2d7d 
How do I set this up?
Edit the /boot/config.txt before first boot.
If you're using HDMI (or an HDMI-to-VGA converter), select one HDMI block of your preference by uncommenting it and commenting out the rest. The default is HDMI CEA 480p.
If you're using a VGA666, uncomment this block, and then one VGA666 block of your choice.
## VGA666 - DPI Settings
#dtoverlay=vga666
#enable_dpi_lcd=1
#display_default_lcd=1
Default Retroarch Keyboard Hotkeys
*SPACE: Enable Hotkey*
F1 Menu | F2 FF Toggle | F3 Reset | F4 Cheat Toggle | F5 Save State | F6 Load State | F7 Change State - | F8 Change State + | F9 Screenshot | F10 Mute | ENTER: Exit
The GBA/GBC/GB overlay is cropped on my screen!!!
Go into the Retroarch menu in game and navigate to "Quick Menu > On-Screen Overlay". Click "Overlay Preset" and choose the VGA version instead of the 480p version -- "crt_gbaplayer..." is for GBA and "crt_supergameboy..." is for GB/GBC.
I have X Issue! Help?
Chrono Cross (or Bloody Roar II) or some other PSX game has weird thick-as-fuck scanlines!
Disable the scanline shader, leaving Snap-Basic in place. Chances are you're playing a 480i game that wasn't intended to have scanlines, and the shader can't clamp to the right frames.
I only have like 500mb of free space on my XXgb SD card!
You need to expand your file system via Raspi-Config. Follow these steps.
GBA, PSX, Neo-Geo, Sega-CD, PCE-CD, etc. games don't work!
I haven't included any BIOS files that don't come with the stock RetroPie image, so you'll need to install the appropriate files in the BIOS folder. For Neo-Geo, I highly recommend the UniBios (renamed to neogeo.zip).
Samba Share won't work after I set up Wi-Fi!
The Samba share service starts on boot, provided that a network is available. Configure your Wi-Fi and then reboot first; if that doesn't fix it, go into Retropie Setup > Configuration/Tools > Samba > Install Samba. Once it's complete, reboot and it should be golden.
USB-Romservice and/or Retropie-Mount don't work!
Follow this guide, but follow these steps before plugging in your thumb drive:
  • Go to Retropie-Setup
  • Update retropie install script
  • Go to Manage Packages -> Optional Packages
  • Scroll all the way down to usbromservice
  • Uninstall usbromservice
  • Install it again from Binary
  • Once finished, choose Configuration, then Enable USB Romservice
  • Reboot, and wait for it to fully boot into ES
  • Plug in USB stick (has to be FAT32) and WAIT A LONG TIME (if your stick has a light, wait for it to stop flashing)
submitted by ErantyInt to u/ErantyInt

CRTPi-VGA v3.0V - Find that VGA Monitor Yet?!

CRTPi Project Presents:

CRTPi-VGA v3.0V

A CRTPi image for running 240p on VGA CRT monitors
Other Releases:
Changelog: v3.0V for VGA-666 05/12/2020
Changelog: v2.5V for VGA-666 05/05/2020
Changelog: v2.0VX for VGA-666 03/21/2020
Required Hardware:
What is this?
Since I've been relegated to working from home for the next forever, I needed something to pass the time. Lots of users have asked for, and worked with me to create a solution for what we'll call the "Poor Man's BVM." A $5 Gert VGA666 adapter, cheap/free 31khz VGA Monitor, and a Pi packed with roms. What could be a better way to pass the quarantine?
For a long time, there were several stumbling blocks:
I finally stumbled upon some old threads with people listing out some 640x480 hdmi_timings, and that cracked the whole case wide open. I finally had the missing piece that could be slotted into my existing images. The end result is Emulationstation and other non-libretro emulators launching in 640x480p @ 65hz (great for PSP, DOSbox, ScummVM, and Kodi!) and all Retroarch emulators launching in 2048x240p or 1920x240p @ 120hz.
I opted to steer away from Black Frame Insertion and instead change the VSync Swap Interval to 2 (running the framerate at half of 120hz). This solves the intermittent flicker and also the reduced gamma from BFI. Overall, it's a much more pleasing experience IMO. You can always change the VSync Swap Interval back to 1 and enable BFI in Retroarch if you prefer it the other way.
What Does That Look Like?
Here's a bunch of pics I took, some better than others!
What is Different?
See the current changelog and the v3.0 thread for a complete list.
What is Run-Ahead?
The Run Ahead feature calculates the frames as fast as possible in the background to "rollback" the action as close as possible to the input command requested.
I've enabled run-ahead on most of the 8 & 16-bit consoles and handhelds. A single frame is saved (using the second instance), which dramatically reduces input lag without affecting performance on a Pi3B+. More frames would require more hardware power, and may be achievable via overclocking.
lr-snes9x2010: consistent 60.0-60.2 FPS @ 60.098801hz
lr-fceumm: consistent 60.0-60.2 FPS @ 60.098801hz
lr-beetle-pce-fast: consistent 60.1-60.2 FPS @ 60.000000hz
lr-genesis-gx-plus: consistent 59.9-60.2 FPS @ 59.922741hz (both Genesis and Sega CD)
lr-picodrive: consistent 59.9-60.2 FPS @ 59.922741hz (Master System, Game Gear, and 32X)
lr-gambatte: consistent 60.0-60.2 FPS @ 60.098801hz (SGB2 framerate)
lr-mgba: consistent 59.8-60.4 FPS @ 60.002220hz (Gamecube framerate)
To disable runahead for a game (or emulator):
Quick Menu > Latency > Run-Ahead to Reduce Latency > OFF 
What is Snap-Shader?
It's a Retroarch GLSL shader that ensures games on a CRT will look as good as on original hardware. It makes games crisp vertically, without shimmering horizontally. It correctly aligns the games for you regardless of console, and virtually eliminates the need for separate configurations per core (console).
https://github.com/ektgit/snap-shader-240p
Snap Shader (especially the snap-basic) is super useful on consoles where you may have a mix of horizontal resolutions within the core that you don't necessarily want to set individual game configs for. This is especially useful in PSX, FDS, PCE/PCE-CD, 32X, and MAME.
So far, the image is only set up for Snap-Basic (Pass: 1, Filter: Nearest, Scale: Don't Care) on lr-PCSX-ReARMed. If you care to, I would definitely try it out on other emulators. Here's the enable process:
  • Quick Menu > Shaders
  • Video Shaders > On
  • Shader Passes > 1
  • Shader #0 > snap-basic.glsl
  • Shader #0 Filter > Nearest
  • Shader #0 Scale > Don't Care
  • Save > Save Core Preset
What Does This NOT Have?
This doesn't have any ROMs (other than freeware test suites), BIOS files, music, screenshots, metadata, or videos concerning copyrighted games. Other than the configurations and overlays, it has nothing that can't be downloaded through the repository or as freeware.
Where Can I Get It?
You can download a premade image from Google Drive:
NOTE: Please expand your file system via Raspi-Config after your first boot, and reboot!
CRTPi-VGA v3.0V: For Pi3B/B+ with VGA666
MD5: 828cf4e5b67f67e8b5bd1e4fb8477332 
Default Retroarch Keyboard Hotkeys
*SPACE: Enable Hotkey*
F1 Menu | F2 FF Toggle | F3 Reset | F4 Cheat Toggle | F5 Save State | F6 Load State | F7 Change State - | F8 Change State + | F9 Screenshot | F10 Mute | ENTER: Exit
I have X Issue! Help?
I only have like 500mb of free space on my XXgb SD card!
You need to expand your file system via Raspi-Config. Follow these steps.
GBA, PSX, Neo-Geo, Sega-CD, PCE-CD, etc. games don't work!
I haven't included any BIOS files that don't come with the stock RetroPie image, so you'll need to install the appropriate files in the BIOS folder. For Neo-Geo, I highly recommend the UniBios (renamed to neogeo.zip).
Samba Share won't work after I set up Wi-Fi!
The Samba share service starts on boot, provided that a network is available. Configure your Wi-Fi and then reboot first; if that doesn't fix it, go into Retropie Setup > Configuration/Tools > Samba > Install Samba. Once it's complete, reboot and it should be golden.
USB-Romservice and/or Retropie-Mount don't work!
Follow this guide, but follow these steps before plugging in your thumb drive:
  • Go to Retropie-Setup
  • Update retropie install script
  • Go to Manage Packages -> Optional Packages
  • Scroll all the way down to usbromservice
  • Uninstall usbromservice
  • Install it again from Binary
  • Once finished, choose Configuration, then Enable USB Romservice
  • Reboot, and wait for it to fully boot into ES
  • Plug in USB stick (has to be FAT32) and WAIT A LONG TIME (if your stick has a light, wait for it to stop flashing)
Timings for Boot and Runcommand
640 x 480p @ 65hz Timings: Emulationstation, DOSBox, ScummVM, etc.
640 1 56 56 80 480 0 1 3 25 0 0 0 65 0 36000000 1 #640x480 VGA666 
1280 x 720p @ 60hz Timings: Kodi
1280 1 80 72 216 720 1 5 3 22 0 0 0 60 0 74239049 1 #1280x720p 
Integer Scale Super-Resolution 240p @ 120hz Timings: All Retroarch Emulators
2048 1 180 202 300 240 1 3 5 14 0 0 0 120 0 85909090 1 #256x240/224p
1920 1 167 247 265 240 1 3 7 12 0 0 0 120 0 81720000 1 #320x240/224p
1600 1 95 157 182 240 1 4 3 15 0 0 0 120 0 64000000 1 #320x240/224p Alternate
Integer Scale Super-Resolution 480p @ 60hz Timings: Dreamcast and PSP Retroarch Emulators
2048 1 180 202 300 480 1 6 10 28 0 0 0 60 0 85909090 1 #320/256x480/448p 
submitted by ErantyInt to u/ErantyInt

yarn start fails after an error installing the date-fns module in a React Native app.

For date formatting I tried downloading date-fns through:
npm install date-fns --save 
Installation failed and I got the following warnings and errors:
npm WARN deprecated [email protected]: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated [email protected]: Please see https://github.com/lydell/urix#deprecated
npm WARN deprecated [email protected]: request has been deprecated, see https://github.com/request/request/issues/3142
npm WARN deprecated [email protected]: [email protected]<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of [email protected]
npm WARN deprecated [email protected]: 8.1.1 mistakenly contains the contents of 8.2.1; use that version instead
npm WARN deprecated [email protected]: [email protected]<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of [email protected]
npm WARN deprecated [email protected]: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
npm WARN deprecated [email protected]: This package has been deprecated, please see migration guide at 'https://github.com/formatjs/formatjs/tree/mastepackages/intl-relativeformat#migration-guide'
npm WARN deprecated @hapi/[email protected]: This version has been deprecated and is no longer supported or maintained
npm WARN deprecated @hapi/[email protected]: This version has been deprecated and is no longer supported or maintained
npm WARN deprecated [email protected]: Check out `lodash.merge` or `merge-options` instead.
npm WARN rm not removing C:\Users\kanch\Documents\ReactNative\confusion\node_modules\.bin\rimraf.cmd as it wasn't installed by C:\Users\kanch\Documents\ReactNative\confusion\node_modules\rimraf
npm WARN rm not removing C:\Users\kanch\Documents\ReactNative\confusion\node_modules\.bin\rimraf as it wasn't installed by C:\Users\kanch\Documents\ReactNative\confusion\node_modules\rimraf
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^2.1.2 (node_modules\jest-haste-map\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^1.2.7 (node_modules\metro\node_modules\jest-haste-map\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^1.2.7 (node_modules\metro-core\node_modules\jest-haste-map\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^1.2.7 (node_modules\metro\node_modules\jest-haste-map\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
npm WARN [email protected] requires a peer of [email protected]>6.6.0 but none is installed. You must install peer dependencies yourself.
npm ERR! Maximum call stack size exceeded
After that, I left it there, saved the application without formatting the date, and tried running yarn start, which gives this output:
yarn run v1.22.4
warning ..\..\..\package.json: No license field
$ expo start
internal/modules/cjs/loader.js:1032
  throw err;
  ^
Error: Cannot find module 'nice-try'
Require stack:
- C:\Users\kanch\Documents\ReactNative\confusion\node_modules\expo\node_modules\cross-spawn\lib\parse.js
- C:\Users\kanch\Documents\ReactNative\confusion\node_modules\expo\node_modules\cross-spawn\index.js
- C:\Users\kanch\Documents\ReactNative\confusion\node_modules\expo\bin\cli.js
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:1029:15)
    at Function.Module._load (internal/modules/cjs/loader.js:898:27)
    at Module.require (internal/modules/cjs/loader.js:1089:19)
    at require (internal/modules/cjs/helpers.js:73:18)
    at Object.<anonymous> (C:\Users\kanch\Documents\ReactNative\confusion\node_modules\expo\node_modules\cross-spawn\lib\parse.js:4:17)
    at Module._compile (internal/modules/cjs/loader.js:1200:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1220:10)
    at Module.load (internal/modules/cjs/loader.js:1049:32)
    at Function.Module._load (internal/modules/cjs/loader.js:937:14)
    at Module.require (internal/modules/cjs/loader.js:1089:19) {
  code: 'MODULE_NOT_FOUND',
  requireStack: [
    'C:\\Users\\kanch\\Documents\\ReactNative\\confusion\\node_modules\\expo\\node_modules\\cross-spawn\\lib\\parse.js',
    'C:\\Users\\kanch\\Documents\\ReactNative\\confusion\\node_modules\\expo\\node_modules\\cross-spawn\\index.js',
    'C:\\Users\\kanch\\Documents\\ReactNative\\confusion\\node_modules\\expo\\bin\\cli.js'
  ]
}
error Command failed with exit code 1.
I am unable to start Metro server and can't find nice-try in my node_modules. Any solutions to this?
submitted by dancingfroggie to reactnative

Microservices: Service-to-service communication

The following excerpt about microservice communication is from the new Microsoft eBook, Architecting Cloud-Native .NET Apps for Azure. The book is freely available for online reading and in a downloadable .PDF format at https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/

Microservice Guidance
When constructing a cloud-native application, you'll want to be sensitive to how back-end services communicate with each other. Ideally, the less inter-service communication, the better. However, avoidance isn't always possible as back-end services often rely on one another to complete an operation.
There are several widely accepted approaches to implementing cross-service communication. The type of communication interaction will often determine the best approach.
Consider the following interaction types:
  • Query – a calling microservice needs an immediate response from another to complete an operation.
  • Command – a calling microservice needs another microservice to perform an action, typically without waiting for a response.
  • Event – a microservice announces that something has happened; other interested microservices react to it.
Microservice systems typically use a combination of these interaction types when executing operations that require cross-service interaction. Let's take a close look at each and how you might implement them.

Queries

Many times, one microservice might need to query another, requiring an immediate response to complete an operation. A shopping basket microservice may need product information and a price to add an item to its basket. There are a number of approaches for implementing query operations.

Request/Response Messaging

One option for implementing this scenario is for the calling back-end microservice to make direct HTTP requests to the microservices it needs to query, shown in Figure 4-8.

Figure 4-8. Direct HTTP communication
While direct HTTP calls between microservices are relatively simple to implement, care should be taken to minimize this practice. To start, these calls are always synchronous and will block the operation until a result is returned or the request times out. What were once self-contained, independent services, able to evolve independently and deploy frequently, now become coupled to each other. As coupling among microservices increases, their architectural benefits diminish.
Executing an infrequent request that makes a single direct HTTP call to another microservice might be acceptable for some systems. However, high-volume calls that invoke direct HTTP calls to multiple microservices aren't advisable. They can increase latency and negatively impact the performance, scalability, and availability of your system. Even worse, a long series of direct HTTP communication can lead to deep and complex chains of synchronous microservices calls, shown in Figure 4-9:

Figure 4-9. Chaining HTTP queries
You can certainly imagine the risk in the design shown in the previous image. What happens if Step #3 fails? Or Step #8 fails? How do you recover? What if Step #6 is slow because the underlying service is busy? How do you continue? Even if all works correctly, think of the latency this call would incur, which is the sum of the latency of each step.
The large degree of coupling in the previous image suggests the services weren't optimally modeled. It would behoove the team to revisit their design.

Materialized View pattern

A popular option for removing microservice coupling is the Materialized View pattern. With this pattern, a microservice stores its own local, denormalized copy of data that's owned by other services. Instead of the Shopping Basket microservice querying the Product Catalog and Pricing microservices, it maintains its own local copy of that data. This pattern eliminates unnecessary coupling and improves reliability and response time. The entire operation executes inside a single process. We explore this pattern and other data concerns in Chapter 5.
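The following sketch isn't from the book and uses Python rather than .NET, but it shows the shape of the pattern: the basket service keeps its own denormalized product view, refreshed by catalog/pricing events, so adding an item never queries another service. The class and field names are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class ProductView:
    product_id: str
    name: str
    price: float


class BasketService:
    """Shopping basket holding its own denormalized copy of catalog/pricing data."""

    def __init__(self):
        self._products = {}   # the materialized view: product_id -> ProductView
        self._baskets = {}    # basket_id -> list of ProductView

    def on_product_changed(self, event):
        # Subscribed to catalog/pricing change events; keeps the local view current.
        self._products[event["product_id"]] = ProductView(
            event["product_id"], event["name"], float(event["price"]))

    def add_item(self, basket_id, product_id):
        # No cross-service query: the whole operation runs inside this process.
        view = self._products[product_id]
        self._baskets.setdefault(basket_id, []).append(view)
        return view
```

The trade-off is eventual consistency: the local view lags the owning service by however long event propagation takes.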

Service Aggregator Pattern

Another option for eliminating microservice-to-microservice coupling is an Aggregator microservice, shown in purple in Figure 4-10.

Figure 4-10. Aggregator microservice
The pattern isolates an operation that makes calls to multiple back-end microservices, centralizing its logic into a specialized microservice. The purple checkout aggregator microservice in the previous figure orchestrates the workflow for the Checkout operation. It includes calls to several back-end microservices in a sequenced order. Data from the workflow is aggregated and returned to the caller. While it still implements direct HTTP calls, the aggregator microservice reduces direct dependencies among back-end microservices.
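As a rough illustration (not from the book, and in Python rather than .NET), here is what the checkout aggregator's logic might look like. The service URLs, payloads, and field names are hypothetical; the point is the shape of the pattern: sequenced HTTP calls centralized in one place, with the results aggregated into a single response.

```python
import requests

# Hypothetical back-end service URLs; in practice these come from
# configuration or service discovery.
BASKET_URL = "http://basket-svc/api/baskets"
PAYMENT_URL = "http://payment-svc/api/charges"
SHIPPING_URL = "http://shipping-svc/api/shipments"


def checkout(basket_id):
    """Orchestrate the Checkout workflow with sequenced back-end calls."""
    basket = requests.get(f"{BASKET_URL}/{basket_id}", timeout=5).json()
    charge = requests.post(PAYMENT_URL, json={"amount": basket["total"]}, timeout=5).json()
    shipment = requests.post(SHIPPING_URL, json={"basketId": basket_id}, timeout=5).json()
    # Aggregate the results of each step into one response for the caller.
    return {"chargeId": charge["id"], "shipmentId": shipment["id"], "status": "confirmed"}
```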

Request/Reply Pattern

Another approach for decoupling synchronous HTTP messages is a Request-Reply Pattern, which uses queuing communication. Communication using a queue is always a one-way channel, with a producer sending the message and consumer receiving it. With this pattern, both a request queue and response queue are implemented, shown in Figure 4-11.

Figure 4-11. Request-reply pattern
Here, the message producer creates a query-based message that contains a unique correlation ID and places it into a request queue. The consuming service dequeues the message, processes it, and places the response into the response queue with the same correlation ID. The producer service dequeues the message, matches it with the correlation ID, and continues processing. We cover queues in detail in the next section.
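As a rough illustration (not from the book, and in Python rather than .NET), the sketch below implements the requesting side with Azure Storage Queues, carrying the correlation ID inside a JSON body since Storage queues have no dedicated correlation property. The queue names and connection string are placeholders, and the calls assume the `azure-storage-queue` v12 package.

```python
import json
import time
import uuid

from azure.storage.queue import QueueClient

CONN = "<storage-connection-string>"   # placeholder
request_q = QueueClient.from_connection_string(CONN, "price-requests")
response_q = QueueClient.from_connection_string(CONN, "price-responses")


def send_query(product_id):
    """Producer side: enqueue a query tagged with a fresh correlation ID."""
    correlation_id = str(uuid.uuid4())
    request_q.send_message(json.dumps(
        {"correlationId": correlation_id, "productId": product_id}))
    return correlation_id


def await_reply(correlation_id, timeout_s=30):
    """Poll the response queue until a reply with the matching ID arrives."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        for msg in response_q.receive_messages():
            body = json.loads(msg.content)
            if body.get("correlationId") == correlation_id:
                response_q.delete_message(msg)   # consume only the matching reply
                return body
        time.sleep(0.5)
    return None
```

Non-matching replies are simply left on the queue and become visible again after the visibility timeout; a production implementation would usually give each requester its own reply queue, or use Service Bus sessions instead.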

Commands

Another type of communication interaction is a command. A microservice may need another microservice to perform an action. The Ordering microservice may need the Shipping microservice to create a shipment for an approved order. In Figure 4-12, one microservice, called a Producer, sends a message to another microservice, the Consumer, commanding it to do something.

Figure 4-12. Command interaction with a queue
Most often, the Producer doesn't require a response and can fire-and-forget the message. If a reply is needed, the Consumer sends a separate message back to the Producer on another channel. A command message is best sent asynchronously with a message queue, supported by a lightweight message broker. In the previous diagram, note how a queue separates and decouples both services.
A message queue is an intermediary construct through which a producer and consumer pass a message. Queues implement an asynchronous, point-to-point messaging pattern. The Producer knows where a command needs to be sent and routes appropriately. The queue guarantees that a message is processed by exactly one of the consumer instances that are reading from the channel. In this scenario, either the producer or consumer service can scale out without affecting the other. As well, technologies can be disparate on each side, meaning that we might have a Java microservice calling a Golang microservice.
In chapter 1, we talked about backing services. Backing services are ancillary resources upon which cloud-native systems depend. Message queues are backing services. The Azure cloud supports two types of message queues that your cloud-native systems can consume to implement command messaging: Azure Storage Queues and Azure Service Bus Queues.

Azure Storage Queues

Azure storage queues offer a simple queueing infrastructure that is fast, affordable, and backed by Azure storage accounts.
Azure Storage Queues feature a REST-based queuing mechanism with reliable and persistent messaging. They provide a minimal feature set, but are inexpensive and store millions of messages. Their capacity ranges up to 500 TB. A single message can be up to 64 KB in size.
You can access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. Storage queues can scale out to large numbers of concurrent clients to handle traffic spikes.
That said, there are limitations with the service: message ordering isn't guaranteed, individual messages are capped at 64 KB, and richer capabilities such as duplicate detection, transactions, and sessions (covered below for Service Bus) aren't available.
Figure 4-13 shows the hierarchy of an Azure Storage Queue.

Figure 4-13. Storage queue hierarchy
In the previous figure, note how storage queues store their messages in the underlying Azure Storage account.
For developers, Microsoft provides several client and server-side libraries for Storage queue processing. Most major platforms are supported, including .NET, Java, JavaScript, Ruby, Python, and Go. Developers should never communicate directly with these libraries. Doing so will tightly couple your microservice code to the Azure Storage Queue service. It's a better practice to insulate your services from the implementation details of the API: introduce an intermediation layer, or intermediate API, that exposes generic operations and encapsulates the concrete library. This loose coupling enables you to swap out one queuing service for another without having to make changes to the mainline service code.
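Here's a minimal sketch of that intermediation layer (not from the book, in Python rather than .NET), assuming the `azure-storage-queue` package; the interface and class names are invented for the example.

```python
from abc import ABC, abstractmethod

from azure.storage.queue import QueueClient


class CommandQueue(ABC):
    """The generic operations mainline service code is allowed to depend on."""

    @abstractmethod
    def publish(self, payload: str) -> None: ...

    @abstractmethod
    def consume(self):
        """Yield payload strings, removing each message once handled."""


class StorageQueueAdapter(CommandQueue):
    """The only class that knows about Azure Storage Queues."""

    def __init__(self, connection_string, queue_name):
        self._client = QueueClient.from_connection_string(connection_string, queue_name)

    def publish(self, payload):
        self._client.send_message(payload)

    def consume(self):
        for msg in self._client.receive_messages():
            yield msg.content
            self._client.delete_message(msg)
```

Swapping Storage Queues for Service Bus later would mean writing a second adapter and changing only the composition code, not the mainline service logic.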
Azure Storage queues are an economical option for implementing command messaging in your cloud-native applications, especially when a queue will exceed 80 GB in size or a simple feature set is acceptable. You only pay for the storage of the messages; there are no fixed hourly charges.

Azure Service Bus Queues

For more complex messaging requirements, consider Azure Service Bus queues.
Sitting atop a robust message infrastructure, Azure Service Bus supports a brokered messaging model. Messages are reliably stored in a broker (the queue) until received by the consumer. The queue guarantees First-In/First-Out (FIFO) message delivery, respecting the order in which messages were added to the queue.
The size of a message can be much larger, up to 256 KB. Messages are persisted in the queue for an unlimited period of time. Service Bus supports not only HTTP-based calls, but also provides full support for the AMQP protocol. AMQP is an open standard across vendors that supports a binary protocol and higher degrees of reliability.
Service Bus provides a rich set of features, including transaction support and a duplicate detection feature. The queue guarantees "at most once delivery" per message. It automatically discards a message that has already been sent. If a producer is in doubt, it can resend the same message, and Service Bus guarantees that only one copy will be processed. Duplicate detection frees you from having to build additional infrastructure plumbing.
Two more enterprise features are partitioning and sessions. A conventional Service Bus queue is handled by a single message broker and stored in a single message store. But, Service Bus Partitioning spreads the queue across multiple message brokers and message stores. The overall throughput is no longer limited by the performance of a single message broker or messaging store. A temporary outage of a messaging store doesn't render a partitioned queue unavailable.
Service Bus Sessions provide a way to group related messages. Imagine a workflow scenario where messages must be processed together and the operation completed at the end. To take advantage, sessions must be explicitly enabled for the queue and each related message must contain the same session ID.
However, there are some important caveats: Service Bus queue size is limited to 80 GB, which is much smaller than what's available from Storage queues. Additionally, Service Bus queues incur a base cost and a charge per operation.
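A minimal sketch of sending and receiving with a Service Bus queue (not from the book, in Python rather than .NET), assuming v7-style calls from the `azure-servicebus` package; the queue name and connection string are placeholders. The explicit message_id is the key duplicate detection uses to discard a resend, and session_id applies only if sessions are enabled on the queue.

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN = "<service-bus-connection-string>"   # placeholder

with ServiceBusClient.from_connection_string(CONN) as client:
    # Producer: an explicit message_id lets duplicate detection drop a resend.
    with client.get_queue_sender(queue_name="shipping-commands") as sender:
        sender.send_messages(ServiceBusMessage(
            '{"orderId": "1234"}',
            message_id="ship-order-1234",   # idempotency key for duplicate detection
            session_id="order-1234",        # only valid on a session-enabled queue
        ))

    # Consumer: receives from the session, then settles each message.
    with client.get_queue_receiver(queue_name="shipping-commands",
                                   session_id="order-1234") as receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            print(str(msg))
            receiver.complete_message(msg)
```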
Figure 4-14 outlines the high-level architecture of a Service Bus queue.

Figure 4-14. Service Bus queue
In the previous figure, note the point-to-point relationship. Two instances of the same provider are enqueuing messages into a single Service Bus queue. Each message is consumed by only one of the three consumer instances on the right. Next, we discuss how to implement messaging where different consumers may all be interested in the same message.

Events

Message queuing is an effective way to implement communication where a producer can asynchronously send a consumer a message. However, what happens when many different consumers are interested in the same message? A dedicated message queue for each consumer wouldn't scale well and would become difficult to manage.
To address this scenario, we move to the third type of message interaction, the event. One microservice announces that an action has occurred. Other microservices, if interested, react to the action, or event.
Eventing is a two-step process. For a given state change, a microservice publishes an event to a message broker, making it available to any other interested microservice. The interested microservice is notified by subscribing to the event in the message broker. You use the Publish/Subscribe pattern to implement event-based communication.
Figure 4-15 shows a shopping basket microservice publishing an event with two other microservices subscribing to it.

Figure 4-15. Event-Driven messaging
Note the event bus component that sits in the middle of the communication channel. It's a custom class that encapsulates the message broker and decouples it from the underlying application. The ordering and inventory microservices independently react to the event with no knowledge of each other, nor of the shopping basket microservice. When the registered event is published to the event bus, they act upon it.
With eventing, we move from queuing technology to topics. A topic is similar to a queue, but supports a one-to-many messaging pattern. One microservice publishes a message. Multiple subscribing microservices can choose to receive and act upon that message. Figure 4-16 shows a topic architecture.

Figure 4-16. Topic architecture
In the previous figure, publishers send messages to the topic. At the end, subscribers receive messages from subscriptions. In the middle, the topic forwards messages to subscriptions based on a set of rules, shown in dark blue boxes. Rules act as a filter that forwards specific messages to a subscription. Here, a "GetPrice" event would be sent to the price and logging subscriptions, as the logging subscription has chosen to receive all messages. A "GetInformation" event would be sent to the information and logging subscriptions.
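A hedged sketch of publish/subscribe against a topic (not from the book, in Python rather than .NET), again assuming v7-style calls from the `azure-servicebus` package. The topic, subscription, and label names loosely mirror the figure but are placeholders, and the filter rules themselves are normally created when the subscription is provisioned rather than in application code.

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN = "<service-bus-connection-string>"   # placeholder

with ServiceBusClient.from_connection_string(CONN) as client:
    # Publisher: one message sent to the topic...
    with client.get_topic_sender(topic_name="catalog-events") as sender:
        sender.send_messages(ServiceBusMessage(
            '{"productId": "1234"}',
            subject="GetPrice",   # subscriptions can filter on this label
        ))

    # ...is delivered to every subscription whose rules match it.
    with client.get_subscription_receiver(topic_name="catalog-events",
                                          subscription_name="price") as receiver:
        for msg in receiver.receive_messages(max_wait_time=5):
            print(str(msg))
            receiver.complete_message(msg)
```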
The Azure cloud supports two different topic services: Azure Service Bus Topics and Azure EventGrid.

Azure Service Bus Topics

Sitting on top of the same robust brokered message model of Azure Service Bus queues are Azure Service Bus Topics. A topic can receive messages from multiple independent publishers and send messages to up to 2,000 subscribers. Subscriptions can be dynamically added or removed at runtime without stopping the system or recreating the topic.
Many advanced features from Azure Service Bus queues are also available for topics, including Duplicate Detection and Transaction support. By default, Service Bus topics are handled by a single message broker and stored in a single message store. But, Service Bus Partitioning scales a topic by spreading it across many message brokers and message stores.
Scheduled Message Delivery tags a message with a specific time for processing. The message won't appear in the topic before that time. Message Deferral enables you to defer a retrieval of a message to a later time. Both are commonly used in workflow processing scenarios where operations are processed in a particular order. You can postpone processing of received messages until prior work has been completed.
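A rough sketch of both features (not from the book, in Python rather than .NET), assuming the `schedule_messages`, `defer_message`, and `receive_deferred_messages` calls from the `azure-servicebus` package; the topic, subscription, and workflow details are illustrative only.

```python
import datetime

from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN = "<service-bus-connection-string>"   # placeholder

with ServiceBusClient.from_connection_string(CONN) as client:
    with client.get_topic_sender(topic_name="workflow") as sender:
        # Scheduled Message Delivery: the message stays invisible until this UTC time.
        run_at = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1)
        sender.schedule_messages(ServiceBusMessage("step-2"), run_at)

    with client.get_subscription_receiver(topic_name="workflow",
                                          subscription_name="worker") as receiver:
        deferred = []
        for msg in receiver.receive_messages(max_wait_time=5):
            # Message Deferral: set aside out-of-order work, remembering its sequence number.
            deferred.append(msg.sequence_number)
            receiver.defer_message(msg)

        # ...later, once prior work has completed, pull the deferred messages back.
        if deferred:
            for msg in receiver.receive_deferred_messages(sequence_numbers=deferred):
                receiver.complete_message(msg)
```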
Service Bus topics are a robust and proven technology for enabling publish/subscribe communication in your cloud-native systems.

Azure Event Grid

While Azure Service Bus is a battle-tested messaging broker with a full set of enterprise features, Azure Event Grid is the new kid on the block.
At first glance, Event Grid may look like just another topic-based messaging system. However, it's different in many ways. Focused on event-driven workloads, it enables real-time event processing, deep Azure integration, and an open platform - all on serverless infrastructure. It's designed for contemporary cloud-native and serverless applications.
As a centralized eventing backplane, or pipe, Event Grid reacts to events inside Azure resources and from your own services.
Event notifications are published to an Event Grid Topic, which, in turn, routes each event to a subscription. Subscribers map to subscriptions and consume the events. Like Service Bus, Event Grid supports a filtered subscriber model where a subscription sets rules for the events it wishes to receive. Event Grid provides fast throughput with a guarantee of 10 million events per second, enabling near real-time delivery - far more than what Azure Service Bus can generate.
A sweet spot for Event Grid is its deep integration into the fabric of Azure infrastructure. An Azure resource, such as Cosmos DB, can publish built-in events directly to other interested Azure resources - without the need for custom code. Event Grid can publish events from an Azure Subscription, Resource Group, or Service, giving developers fine-grained control over the lifecycle of cloud resources. However, Event Grid isn't limited to Azure. It's an open platform that can consume custom HTTP events published from applications or third-party services and route events to external subscribers.
When publishing and subscribing to native events from Azure resources, no coding is required. With simple configuration, you can integrate events from one Azure resource to another leveraging built-in plumbing for Topics and Subscriptions. Figure 4-17 shows the anatomy of Event Grid.

Figure 4-17. Event Grid anatomy
A major difference between EventGrid and Service Bus is the underlying message exchange pattern.
Service Bus implements an older style pull model in which the downstream subscriber actively polls the topic subscription for new messages. On the upside, this approach gives the subscriber full control of the pace at which it processes messages. It controls when and how many messages to process at any given time. Unread messages remain in the subscription until processed. A significant shortcoming is the latency between the time the event is generated and the polling operation that pulls that message to the subscriber for processing. Also, the overhead of constant polling for the next event consumes resources and money.
EventGrid, however, is different. It implements a push model in which events are sent to the EventHandlers as received, giving near real-time event delivery. It also reduces cost as the service is triggered only when it's needed to consume an event – not continually as with polling. That said, an event handler must handle the incoming load and provide throttling mechanisms to protect itself from becoming overwhelmed. Many Azure services that consume these events, such as Azure Functions and Logic Apps, provide automatic scaling capabilities to handle increased loads.
Event Grid is a fully managed serverless cloud service. It dynamically scales based on your traffic and charges you only for your actual usage, not pre-purchased capacity. The first 100,000 operations per month are free – operations being defined as event ingress (incoming event notifications), subscription delivery attempts, management calls, and filtering by subject. With 99.99% availability, EventGrid guarantees the delivery of an event within a 24-hour period, with built-in retry functionality for unsuccessful delivery. Undelivered messages can be moved to a "dead-letter" queue for resolution. Unlike Azure Service Bus, Event Grid is tuned for fast performance and doesn't support features like ordered messaging, transactions, and sessions.

Streaming messages in the Azure cloud

Azure Service Bus and Event Grid provide great support for applications that expose single, discrete events like a new document has been inserted into a Cosmos DB. But, what if your cloud-native system needs to process a stream of related events? Event streams are more complex. They're typically time-ordered, interrelated, and must be processed as a group.
Azure Event Hub is a data streaming platform and event ingestion service that collects, transforms, and stores events. It's fine-tuned to capture streaming data, such as continuous event notifications emitted from a telemetry context. The service is highly scalable and can store and process millions of events per second. Shown in Figure 4-18, it's often a front door for an event pipeline, decoupling ingest stream from event consumption.

Figure 4-18. Azure Event Hub
Event Hub supports low latency and configurable time retention. Unlike queues and topics, Event Hubs keeps event data after it's been read by a consumer. This feature enables other data analytic services, both internal and external, to replay the data for further analysis. Events stored in an event hub are only deleted upon expiration of the retention period, which is one day by default but configurable.
Event Hub supports common event publishing protocols including HTTPS and AMQP. It also supports Kafka 1.0. Existing Kafka applications can communicate with Event Hub using the Kafka protocol providing an alternative to managing large Kafka clusters. Many open-source cloud-native systems embrace Kafka.
Event Hubs implements message streaming through a partitioned consumer model in which each consumer only reads a specific subset, or partition, of the message stream. This pattern enables tremendous horizontal scale for event processing and provides other stream-focused features that are unavailable in queues and topics. A partition is an ordered sequence of events that is held in an event hub. As newer events arrive, they're added to the end of this sequence. Figure 4-19 shows partitioning in an Event Hub.

Figure 4-19. Event Hub partitioning
Instead of reading from the same resource, each consumer group reads across a subset, or partition, of the message stream.
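A rough sketch of producing to and consuming from an event hub (not from the book, in Python rather than .NET), assuming v5-style calls from the `azure-eventhub` package; the hub name, connection string, and payload are placeholders.

```python
from azure.eventhub import EventData, EventHubConsumerClient, EventHubProducerClient

CONN = "<event-hubs-connection-string>"   # placeholder
HUB = "telemetry"                         # placeholder event hub name

# Producer: events are appended to the end of one of the hub's partitions.
producer = EventHubProducerClient.from_connection_string(CONN, eventhub_name=HUB)
with producer:
    batch = producer.create_batch()
    batch.add(EventData('{"deviceId": "sensor-7", "temp": 21.4}'))
    producer.send_batch(batch)


def on_event(partition_context, event):
    # Each consumer in the group is assigned a subset of the hub's partitions.
    print(partition_context.partition_id, event.body_as_str())
    partition_context.update_checkpoint(event)   # record progress for this partition


consumer = EventHubConsumerClient.from_connection_string(
    CONN, consumer_group="$Default", eventhub_name=HUB)
with consumer:
    # Blocks and dispatches events as they arrive; "-1" starts from the beginning.
    consumer.receive(on_event=on_event, starting_position="-1")
```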
For cloud-native applications that must stream large numbers of events, Azure Event Hub can be a robust and affordable solution.

About the Author:
Rob Vettor is a Principal Cloud-Native Architect for the Microservice Enterprise Service Group. Reach out to Rob at [email protected] or https://thinkingincloudnative.com/weclome-to-cloud-native/
submitted by robvettor to microservices


CRTPi-VGA v2.5V -- For that VGA Monitor in your Attic

CRTPi Project Presents:

CRTPi-VGA v2.5V

A CRTPi image for running 240p on VGA CRT monitors
Other Releases:
Changelog: v2.5V for VGA-666 05/05/2020
Changelog: v2.0VX for VGA-666 03/21/2020
Required Hardware:
What is this?
Since I've been relegated to working from home for the next forever, I needed something to pass the time. Lots of users have asked for, and worked with me to create a solution for what we'll call the "Poor Man's BVM." A $5 Gert VGA666 adapter, cheap/free 31khz VGA Monitor, and a Pi packed with roms. What could be a better way to pass the quarantine?
For a long time, there were several stumbling blocks:
I finally stumbled upon some old threads with people listing out some 640x480 hdmi_timings, and that cracked the whole case wide open. I finally had the missing piece that could be slotted into my existing images. The end result is Emulationstation and other non-libretro emulators launching in 640x480p @ 65hz (great for PSP, DOSbox, ScummVM, and Kodi!) and all Retroarch emulators launching in 2048x240p or 1920x240p @ 120hz.
I opted to steer away from Black Frame Insertion and instead change the VSync Swap interval to 2 (running the framerate at half of 120hz). This solves the intermittent flicker and also the reduced gamma from BFI. Overall, it's a much more pleasing experie