Write-behind cache, DB down: when should the system stop taking new data in?

Hello:
We are trying to use Coherence for our custom ESB, which brokers payloads of various sizes between consumer and provider applications.
Before Coherence, stopping our DB meant an organization-wide outage for critically important business services.
Since we have at least 40G of RAM in the production environment, we believe that our app
can use the Coherence write-behind option to tolerate at least several hours' worth of DB outage.
We are currently using a near cache backed by a distributed cache in write-behind mode.
Nine business-service JVMs (storage enabled=false) use 30 storage-enabled JVMs.
IMPORTANT: We need to create an automated alerting facility that determines when the
amount of unsaved data reaches a critical level after the DB goes down. This alert should help us decide when our application should stop accepting inbound traffic.
It is hard to use the QueueSize parameter for that because our payload memory footprint can vary from 1KB to 3MB.
We do not expire any entries, in order to enable support queries against the cache during a DB outage.
Our experiments with various flavors of overflow-scheme resulted in OutOfMemoryError, so
we decided to implement a RAM-only cache as a first step. Our current cache configuration is below.
<near-scheme>
  <scheme-name>message_payload_scheme</scheme-name>
  <front-scheme>
    <local-scheme>
      <scheme-ref>limited_entities_front_scheme</scheme-ref>
      <high-units>100</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme>
              <scheme-ref>limited_bytes_scheme</scheme-ref>
              <high-units>199229440</high-units>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.comp.MessagePayloadStore</class-name>
            </class-scheme>
          </cachestore-scheme>
          <read-only>false</read-only>
          <write-delay-seconds>3</write-delay-seconds>
          <write-requeue-threshold>2147483646</write-requeue-threshold>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </back-scheme>
</near-scheme>

<local-scheme>
  <scheme-name>limited_entities_front_scheme</scheme-name>
  <eviction-policy>LRU</eviction-policy>
  <unit-calculator>FIXED</unit-calculator>
</local-scheme>

<local-scheme>
  <scheme-name>limited_bytes_scheme</scheme-name>
  <eviction-policy>HYBRID</eviction-policy>
  <unit-calculator>BINARY</unit-calculator>
</local-scheme>

Good info ... I feel like I need to restate my original question, along with a couple of new questions raised by the discussion above.
Q1. Does Coherence evict 'dirty', 'queued', or 'unsaved' objects for the cache configuration provided above?
The answer should be 'NO'; otherwise Coherence is unsafe to use as a system of record,
since it should not just drop unsaved information on the floor.
Q2. What happens to the front tier of the near + partitioned write-behind cache described above when the amount of unsaved data exceeds the maximum cache capacity defined via high-units?
I would expect map.put to start throwing exceptions: the cache storage is full, so it should not accept more data.
Q3. How can I determine the moment when the amount of dirty data in bytes (not in objects!) hits 85% of the
maximum allowed cache capacity configured in bytes (using the high-units parameter and the BINARY calculator)?
A 'DirtyUnits' counter can probably be built with some lower-level Coherence API. Can we use
this API? A sketch of what we have in mind is below.
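For reference, here is a rough sketch of the kind of alert we have in mind, polling the back-tier Cache MBeans over standard JMX. It assumes Coherence JMX management is enabled on at least one node and that the back-tier Cache MBean exposes QueueSize, Size, Units, UnitFactor and HighUnits attributes (older releases may not expose UnitFactor). The JMX URL, the cache name "message_payload", the 85% threshold and the dirty-bytes estimate (average binary units per entry multiplied by the queued entry count) are our own assumptions, not something Coherence reports directly.

import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

/**
 * Rough "dirty data" alert sketch. Polls the back-tier Cache MBeans of the
 * payload cache and estimates unsaved bytes as (average binary units per
 * entry) * (write-behind queue length), then compares the estimate against
 * 85% of the configured high-units capacity.
 */
public class DirtyDataMonitor {

    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint of a management-enabled Coherence node.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://cachehost:9991/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // One back-tier Cache MBean per storage-enabled node; the cache
            // name "message_payload" is a placeholder for the real mapping.
            Set<ObjectName> names = mbs.queryNames(new ObjectName(
                    "Coherence:type=Cache,name=message_payload,tier=back,*"), null);

            long cQueued = 0, cEntries = 0, cUnitBytes = 0, cCapacityBytes = 0;
            for (ObjectName name : names) {
                long unitFactor = ((Number) mbs.getAttribute(name, "UnitFactor")).longValue();
                cQueued        += ((Number) mbs.getAttribute(name, "QueueSize")).longValue();
                cEntries       += ((Number) mbs.getAttribute(name, "Size")).longValue();
                cUnitBytes     += ((Number) mbs.getAttribute(name, "Units")).longValue() * unitFactor;
                cCapacityBytes += ((Number) mbs.getAttribute(name, "HighUnits")).longValue() * unitFactor;
            }

            // With the BINARY calculator, Units approximates bytes held, so the
            // average entry size times the queue length estimates dirty bytes.
            double avgEntryBytes = cEntries == 0 ? 0 : (double) cUnitBytes / cEntries;
            double dirtyBytesEst = avgEntryBytes * cQueued;

            if (cCapacityBytes > 0 && dirtyBytesEst >= 0.85 * cCapacityBytes) {
                System.err.println("ALERT: ~" + (long) dirtyBytesEst
                        + " unsaved bytes >= 85% of " + cCapacityBytes + " byte capacity;"
                        + " consider throttling inbound traffic");
            }
        } finally {
            connector.close();
        }
    }
}

If this approach is sound, we would run it from our monitoring agent on a schedule and wire the alert into the ESB's inbound throttle, which is exactly the "stop accepting traffic" signal described above.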
Please understand that we purchased Coherence for reliability: for making our
system independent of short DB outages and for keeping our business services up
and running when the DBAs need some time for admin operations like rebuilding an index.
Performance benefits are secondary and are not as obvious for our system, which
uses primary-key access only and has a well-tuned, co-located Oracle back-end.
We simply cannot put Coherence into production unless we prove that Coherence
can reliably hold the data and give us information about an approaching crisis
(the cache filling up with unsaved data).
If possible, forward this message to Cameron Purdy,
who presented Coherence to our team several months ago.
Thanks,
Vasili Smaliak
Applications Architect, Enterprise App Integration
GMAC ResCap
[email protected]

Similar Messages

  • When should the System State be explicitly backed up?

    I'm trying to find out what is covered by the BMR and System State options in DPM 2012 R2. According to the blog post below, BMR includes the System State. But what does the System State include? Would you ever need to back it up if you've included BMR
    in the PG?
    By default, BMR protects the "critical system data." According to the Microsoft definition, critical data = System State, the C drive, and all other drives that contain system data directories (Program Files).
    http://scdpm.blogspot.co.uk/2011/11/bare-metal-recovery-bmr-option-in-dpm.html

    Hi
    The System State includes:
    http://technet.microsoft.com/en-us/library/cc938537.aspx
    BMR is an additional option which gives you the possibility to recover the System State (which is included in BMR) on different hardware.
    See this:
    http://blogs.technet.com/b/askcore/archive/2011/05/12/bare-metal-restore.aspx
    Seidl Michael | http://www.techguy.at |
    twitter.com/techguyat | facebook.com/techguyat

  • Write-Behind Caching and Old Values

    Is there a way to access the old value cached in the write-behind cache for the same key from the CacheStore's store() or storeAll() method?

    > I have a business POJO with three parts: partA, partB, partC inside.
         > Each of these three parts is persisted by a separate SQL. So,
         > every time I persist my POJO, up to 3 SQLs may be executed.
         I understand.
         > When a change happens in my POJO, it goes onto the
         > write-behind queue. In my CacheStore.store() or
         > CacheStore.storeAll() I would like to be able to make
         > an intelligent decision about which of the three
         > parts: partA, partB or partC has actually changed and
         > only run the SQL updates for the changed parts. This
         > would allow me to avoid massive amounts of
         > unnecessary SQL updates for the parts that did not
         > change.
         Right. Keep in mind that there are two conditions that you must be aware of:
         1) Multiple updates could have occurred to the object, meaning that the database update would have to "roll up" the results of multiple changes to the object.
         2) Some or all of the updates could have already occurred to the database. This may be a little trickier to understand, but it reflects the possible machine failure conditions that occurred while a write-behind was in progress.
         Although the latter are unlikely, they should be accounted for, and of course they are harder to test for with certainty. As a result, the updates to the information (the CacheStore implementation) must be built in an "idempotent" manner, i.e. allowing it to be executed more than once with no additional side-effects.
         > If I had access to the POJO stored under the same key
         > before the new value was put in cache, I could use
         > equals() on each of the three parts to find out
         > exactly which one of them changed.
         While this is true, you would need to compare the "known previous database state" version, not just the "old" version.
         > Of course, if this functionality is not available, I
         > would have to create dirty flags for each of the
         > three POJO parts. But I can't really clear my POJO's
         > flags and recache the POJO from within the store() or
         > storeAll(), right?
         Yes, but remember that those flags are "could be dirty" flags, because of the above failure modes that I described.
         Peace,
         Cameron Purdy
         Tangosol Coherence: The Java Data Grid
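    For illustration, here is a minimal sketch of the idempotent-store idea discussed above, assuming a hypothetical three-part POJO with "could be dirty" flags. The class, table and column names and the MERGE text are made up, and only partA is shown (partB and partC would follow the same pattern); this is not the actual CacheStore from this thread.

    import com.tangosol.net.cache.CacheStore;

    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.Iterator;
    import java.util.Map;

    /**
     * Sketch of an idempotent store() for a three-part POJO. Each part is
     * written with an upsert (MERGE) that is safe to execute more than once.
     */
    public class ThreePartPojoStore implements CacheStore {

        private static final String MERGE_PART_A =
            "MERGE INTO PART_A t USING (SELECT ? id, ? data FROM dual) s "
          + "ON (t.id = s.id) "
          + "WHEN MATCHED THEN UPDATE SET t.data = s.data "
          + "WHEN NOT MATCHED THEN INSERT (id, data) VALUES (s.id, s.data)";

        private final DataSource ds;  // hypothetical wiring

        public ThreePartPojoStore(DataSource ds) {
            this.ds = ds;
        }

        public void store(Object oKey, Object oValue) {
            ThreePartPojo pojo = (ThreePartPojo) oValue;
            try {
                Connection con = ds.getConnection();
                try {
                    // Write only the parts that *may* have changed; because the
                    // SQL is an idempotent MERGE, writing a clean part again (or
                    // replaying after a failure) has no additional side effects.
                    if (pojo.isPartAMaybeDirty()) {
                        upsert(con, MERGE_PART_A, oKey, pojo.getPartA());
                    }
                    // ...same pattern for partB and partC...
                } finally {
                    con.close();
                }
            } catch (SQLException e) {
                // Propagate so the write-behind queue treats this as a store failure.
                throw new RuntimeException(e);
            }
        }

        public void storeAll(Map mapEntries) {
            for (Iterator iter = mapEntries.entrySet().iterator(); iter.hasNext(); ) {
                Map.Entry entry = (Map.Entry) iter.next();
                store(entry.getKey(), entry.getValue());
            }
        }

        public void erase(Object oKey)           { /* DELETEs omitted in this sketch */ }
        public void eraseAll(Collection colKeys) { /* omitted */ }
        public Object load(Object oKey)          { return null; /* SELECTs omitted */ }
        public Map loadAll(Collection colKeys)   { return Collections.EMPTY_MAP; }

        private void upsert(Connection con, String sql, Object key, Object part)
                throws SQLException {
            PreparedStatement stmt = con.prepareStatement(sql);
            try {
                stmt.setObject(1, key);
                stmt.setObject(2, part);
                stmt.executeUpdate();
            } finally {
                stmt.close();
            }
        }

        /** Minimal stand-in for the real business POJO discussed above. */
        public static class ThreePartPojo implements java.io.Serializable {
            private Object partA, partB, partC;
            private boolean partAMaybeDirty, partBMaybeDirty, partCMaybeDirty;
            public boolean isPartAMaybeDirty() { return partAMaybeDirty; }
            public Object  getPartA()          { return partA; }
            // partB / partC accessors follow the same pattern
        }
    }

    The design point is simply that a rolled-up or replayed write is harmless, which is what the failure modes described above require.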

  • Write-behind cache not removing entries after upgrade to 3.2

    We recently upgraded tangosol.jar and coherence.jar from version 3.0 to version 3.2. After the upgrade, our write-behind caches began consuming all available memory and crashing the JVMs because the entries were not being removed from the cache after being written to the database. We rolled back to the 3.0 jars without making any other modifications and the caches behave as expected. We'd really like to move to 3.2 for the improved network fault tolerance, but we need to resolve this issue first.
    What changes were made in 3.2 with respect to write-behind caches that might cause this issue? I've reviewed our configuration and our code and can't find anything unusual, but I'm not sure what I should be looking for.
    Any ideas?

    I've opened an SR, but I haven't heard back. In the meantime, I've continued digging and I've noticed something strange - in the store() method of our backing map implementation, we take the entry that we just persisted and remove it from the backing map.
    In my small-scale local tests, the size of the map is 1 when we enter store() and is 0 when we leave, as expected. If we process another entry using the 3.0 jars, it's again 1 and then 0. However, it gets more interesting with the 3.2 jars - the size of the map is 1 when we enter store() the first time and 0 when we leave, but if we process another entry, the size is 2 when we enter and 1 when we leave. This pattern continues such that both values increase by 1 every time we process an entry.
    This would imply that we're either removing the entries incorrectly, or they're somehow being reinserted into the map.
    Any ideas?
    Here's the body of our method (with a bunch of sysouts added to the normal logging because this app won't run correctly under a debugger):
            /**
             * Store the specified value under the specific key in the underlying
             * store, then remove the specific key from the internal map and hence
             * the cache itself. This method is intended to support both key/value
             * creation and value update for a specific key.
             *
             * @param oKey   key to store the value under
             * @param oValue value to be stored
             *
             * @throws UnsupportedOperationException if this implementation or the
             *         underlying store is read-only
             */
            public void store(Object oKey, Object oValue)
                {
                RemoveOnStoreRWBackingMap mapBacking = RemoveOnStoreRWBackingMap.this;
                System.out.println("map storing  " + oKey);
                System.out.println("Size before = " + mapBacking.entrySet().size());
                Iterator entries = mapBacking.entrySet().iterator();
                while (entries.hasNext())
                    {
                    System.out.println("entry = " + entries.next());
                    }

                String storeClassName = getCacheStore().getClass().getName();
                Logger log = Logger.getLogger(storeClassName);
                log.debug(storeClassName + ": In store method.  Storing " + oKey);

                long cFailuresBefore = getStoreFailures();
                log.debug(storeClassName + ": failures before=" + cFailuresBefore);
                super.store(oKey, oValue);
                long cFailuresAfter = getStoreFailures();
                log.debug(storeClassName + ": failures after=" + cFailuresAfter);

                if (cFailuresBefore == cFailuresAfter)
                    {
                    log.debug(storeClassName + ": About to remove");
                    Converter converter = mapBacking.getContext().getKeyToInternalConverter();
                    System.out.println("removed " + mapBacking.remove(converter.convert(oKey)));
    //              System.out.println("removed " + mapBacking.getInternalCache().remove(converter.convert(oKey)));
                    log.debug(storeClassName + ": Removed");
                    }

                System.out.println("Size after = " + mapBacking.entrySet().size());
                }

  • Write-Behind Caching and Re-entrant Calls

    Support Team -
         The Coherence User Guide states that:
         "The CacheStore implementation must not call back into the hosting cache service. This includes OR/M solutions that may internally reference Coherence cache services. Note that calling into another cache service instance is allowed, though care should be taken to avoid deeply nested calls (as each call will "consume" a cache service thread and could result in deadlock if a cache service threadpool is exhausted)."
         I have load-tested a use case wherein I have two caches: ABCache and BACache. ABCache is accessed by the application for write operations, BACache is accessed by the application for read operations. ABCache is a write-behind cache whose CacheStore populates BACache by reversing the key and value of each cache entry stored in the ABCache.
         The solution worked under load with no issues.
         But can I use it? Or is it too dangerous?
         My write-behind thread-count setting is left at default (0). The documentation states that
         "If zero, all relevant tasks are performed on the service thread."
         What does this mean? Can I re-enter the caching service if my thread-count is zero?
         Thank you,
         Denis.

    Dimitri -
         I am not sure I fully understand your answer:
         1. "Your test worked because write-behing backing map invokes CacheStore methods asynchronously, on a write-behind thread." In my configuration, I have default value for thread-count, which is zero. According to the documentation, that means that CacheStore methods would be executed by the service thread and not by the write-behind thread. Do I understand this correctly?
         2. "If will fail if CacheStore method will need to be invoked synchronously on a service thread." I am not sure what is the purpose of the "service thread". In which scenarios the "CacheStore method will need to be invoked synchronously on a service thread"?
         Thank you,
         Denis.
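    For illustration, a minimal sketch of the pattern Denis describes: the CacheStore of ABCache writes each entry into BACache with key and value reversed. The class name is hypothetical, and it assumes ABCache and BACache are mapped to different cache services, since per the documentation quoted above a CacheStore must not call back into its own hosting service.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.AbstractCacheStore;

    /**
     * Write-behind CacheStore for ABCache that populates BACache with
     * key and value reversed. Calling into a *different* cache service
     * from a CacheStore is allowed; re-entering the hosting service is not.
     */
    public class ReversingCacheStore extends AbstractCacheStore {

        public Object load(Object oKey) {
            // Read-through is not used for ABCache in this sketch.
            return null;
        }

        public void store(Object oKey, Object oValue) {
            // BACache must be backed by a different cache service than ABCache;
            // otherwise this is a forbidden re-entrant call into the hosting service.
            NamedCache baCache = CacheFactory.getCache("BACache");
            baCache.put(oValue, oKey);   // reversed: the value becomes the key
        }
    }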

  • Write-Behind Caching and Multiple Puts

    What happens when two consecutive puts are performed on the write-behind cache for the same key? Will CacheStore's store() or storeAll() be invoked once for every put() or only once for the last put() (the one which overrode the previous cached values)?

    Hi Denis,
         If you use write-behind, there will be no unnecessary database updates - only the last put() will result in a database update.
         Regards,
         Dimitri
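    For illustration, a rough way to observe this coalescing behaviour. It assumes a single storage-enabled test JVM, a hypothetical cache "coalesce-test" mapped to a write-behind scheme whose cachestore-scheme points at this class, and a write-delay of a few seconds; none of those names come from this thread.

    import java.util.concurrent.atomic.AtomicInteger;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.AbstractCacheStore;

    /**
     * Counts CacheStore.store() invocations to observe write-behind coalescing:
     * two puts to the same key within the write-delay should surface as a single
     * store() call carrying the last value.
     */
    public class CountingStore extends AbstractCacheStore {

        static final AtomicInteger STORE_CALLS = new AtomicInteger();

        public Object load(Object oKey) {
            return null; // nothing to read through in this sketch
        }

        public void store(Object oKey, Object oValue) {
            System.out.println("store #" + STORE_CALLS.incrementAndGet()
                    + " key=" + oKey + " value=" + oValue);
        }

        public static void main(String[] args) throws InterruptedException {
            NamedCache cache = CacheFactory.getCache("coalesce-test");
            cache.put("k", "v1");
            cache.put("k", "v2");   // overwrites v1 before the write-delay expires
            Thread.sleep(10000);    // wait past write-delay-seconds
            // Expected output (same JVM hosts the storage): store #1 key=k value=v2
            CacheFactory.shutdown();
        }
    }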

  • Write-Behind Caching and Limited Internal Cache Size

    Let's say I have a write-behind cache and configure its internal cache to be of a fixed limited size, e.g. 10000 units. What would happen if more than 10000 units are added to the write-behind cache within the write-delay period? Would my CacheStore's storeAll() get all of the added values or would some of the values be missed because of the internal cache size limitation?

    > Hi Denis,
         > If an entry is removed while it is still in the
         > write-behind queue, it will be removed from the queue
         > and CacheStore.store(oKey, oValue) will be invoked
         > immediately.
         >
         > Regards,
         > Dimitri
         Dimitri,
         Just to confirm that I understand it right: if there is a queued update to a key which is then remove()-ed from the cache, the following happens:
         First, CacheStore.store(key, queuedUpdateValue) is invoked.
         Afterwards, CacheStore.erase(key) is invoked.
         Both synchronously to the remove() call.
         I expected that only erase would be invoked.
         BR,
         Robert

  • Why can I no longer, after having downloaded Lion, write accents and other diacriticals in Google, Yahoo, Facebook, or even here in this post?

    Why can I no longer, after having downloaded Lion, write accents and other diacriticals in Google, Yahoo, Facebook, or even here in this post? The new method for getting at the "option" symbols works fine in other places like Spotlight, but now even the older "option+letter" combination doesn't work in most places.

    Chrome doesn't support the new accent/diacritics/macron chooser. I'm not sure about other browsers such as Firefox. You can use the old Option+letter combination that Doug suggested. Hopefully updates will solve these little incompatibilities shortly.
    Neill

  • Photoshop CS6 shuts down when using the move tool

    Photoshop CS6 shuts down when using the move tool. It seems to run fine until I try to use the move tool. PS worked fine just a few weeks ago and now it has this issue. What can I do to resolve this issue?
    Thanks,
    System Specs
    Windows XP Home SP3
    AMD Athlon 64 X2 Dual
    2GB RAM
    Video Cards
    Radeon 7200
    NVIDIA GeForce 8400GS

    Two displays are OK on one adapter; two display adapters are not OK. http://helpx.adobe.com/photoshop/kb/gpu-opengl-support-photoshop-cs4.html
    7. If you are using more than one video adapter, remove the additional cards.
    Multiple video adapters can cause problems when OpenGL and Photoshop use the GPU. It's best to connect two (or more) monitors into one video adapter. If you have to use more than one video adapter, make sure that they are the same make and model. Also make sure that they both support the same versions of OpenGL and Shader Model. Otherwise, crashes and other problems can occur in Photoshop.
    Note: Using two video adapters does not enhance Photoshop's performance.

  • Software shuts down when using the Lighting Effects filter. Help!

    Software shuts down when using the Lighting Effects filter. Help!

    Have you done a Forum search yet?
    Is your GPU driver up to date?
    What are your versions of Photoshop, OS and GPU anyway?

  • MBP suddenly shuts down when using the SuperDrive

    Randomly, my husband's MBP shuts down when using the SuperDrive. What could be the cause of this?
    Appreciate your help.

    Yeah, I hope it is just a firmware-problem. AND that Nokia will release an update soon. The version on my phone now is 021.013, and it is from July. A firmware update is long overdue now, especially after all the other problems reported with this phone. And btw I have turned the phone on and off several times without this helping to fix the radio-issue...
    I guess I will have to return the phone soon, if Nokia is unable to release a fixed firmware.

  • Kernel-Power Event ID 41: The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.

    Hello, currently we are seeing this issue with a couple of our Lenovo T420s laptops with a solid state drive, roughly 10 or so. The reboots happen randomly and do not create a dump file. We have contacted Lenovo and they are
    not sure why it's happening. Since this crash I set the machines to create a minidump; this did not work. My next step will be to disable Automatic Restart on System Failure to see if it brings anything up. I am also looking at using Procmon to dump to a file as well.
    If anyone has any other ideas please let me know.
    The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.
    + System
      - Provider
       [ Name]  Microsoft-Windows-Kernel-Power
       [ Guid]  {331C3B3A-2005-44C2-AC5E-77220C37D6B4}
       EventID 41
       Version 2
       Level 1
       Task 63
       Opcode 0
       Keywords 0x8000000000000002
      - TimeCreated
       [ SystemTime]  2012-02-01T00:02:48.677610900Z
       EventRecordID 8270
       Correlation
      - Execution
       [ ProcessID]  4
       [ ThreadID]  8
       Channel System
       Computer LR8K6TLC.cntr.thrivent.corp
      - Security
       [ UserID]  S-1-5-18
    - EventData
      BugcheckCode 0
      BugcheckParameter1 0x0
      BugcheckParameter2 0x0
      BugcheckParameter3 0x0
      BugcheckParameter4 0x0
      SleepInProgress false
      PowerButtonTimestamp 129725280988099400
    Event 89, Kernel-Power
    ACPI thermal zone ACPI\ThermalZone\THM0 has been enumerated.            
    _PSV = 0K            
    _TC1 = 0            
    _TC2 = 0            
    _TSP = 0ms            
    _AC0 = 0K            
    _AC1 = 0K            
    _AC2 = 0K            
    _AC3 = 0K            
    _AC4 = 0K            
    _AC5 = 0K            
    _AC6 = 0K            
    _AC7 = 0K            
    _AC8 = 0K            
    _AC9 = 0K            
    _CRT = 371K            
    _HOT = 0K            
    _PSL - see event data.
    ---- Details
    System: Provider [Name] Microsoft-Windows-Kernel-Power, [Guid] {331C3B3A-2005-44C2-AC5E-77220C37D6B4}; EventID 89; Version 0; Level 4; Task 86; Opcode 0; Keywords 0x8000000000000020;
    TimeCreated [SystemTime] 2012-02-01T00:02:49.270411900Z; EventRecordID 8271; Execution [ProcessID] 4, [ThreadID] 68; Channel System; Computer LR8K6TLC.cntr.thrivent.corp; Security [UserID] S-1-5-18
    EventData: ThermalZoneDeviceInstance ACPI\ThermalZone\THM0 (instance length 21); AffinityCount 1; _PSV 0; _TC1 0; _TC2 0; _TSP 0; _AC0 through _AC9 0; _CRT 371; _HOT 0; _PSL 0000000000000000
    Thank you.

    We have tested and checked both the BIOS and the firmware of the SSD drives.
    The BIOS was up to date with no issues.
    The firmware was also up to date.
    Users are still experiencing random reboots. I tried to capture the issue with Procmon, but the PC shuts down completely (goes to a black screen with no power at all, even with "Automatically
    restart" turned off under Startup and Recovery), so there are no dmp files as of yet. I am unable to configure Procdump because we do not know where the issue is or what is causing it.
    We are going to replace one of the PCs with a new one with different hardware to see if this resolves the issue. If anyone has any ideas on how to capture what may be happening, that would
    be great.
    Thank you.

  • Conversion failed when converting the varchar value 'undefined' to data type int

    Conversion failed when converting the varchar value 'undefined' to data type int.
    Hi, I installed Oracle Insbridge following the instructions in the manual. In RateManager, when I tried to create a new "Normal rating" or "Underwriting", I'm getting the following exception:
    Server Error in '/RM' Application.
    Conversion failed when converting the varchar value 'undefined' to data type int.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.Data.SqlClient.SqlException: Conversion failed when converting the varchar value 'undefined' to data type int.
    Source Error:
    An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
    Stack Trace:
    [SqlException (0x80131904): Conversion failed when converting the varchar value 'undefined' to data type int.]
    System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +1948826
    System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4844747
    System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194
    System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2392
    System.Data.SqlClient.SqlDataReader.HasMoreRows() +157
    System.Data.SqlClient.SqlDataReader.ReadInternal(Boolean setTimeout) +197
    System.Data.SqlClient.SqlDataReader.Read() +9
    System.Data.SqlClient.SqlCommand.CompleteExecuteScalar(SqlDataReader ds, Boolean returnSqlValue) +50
    System.Data.SqlClient.SqlCommand.ExecuteScalar() +150
    Insbridge.Net.Fwk.DAO.DataAccess.ScalarQuery(String connectionString, String command, Transaction transType, Object[] procParams) +110
    [Exception: Cannot Execute SQL Command: Conversion failed when converting the varchar value 'undefined' to data type int.]
    Insbridge.Net.Fwk.DAO.DataAccess.ScalarQuery(String connectionString, String command, Transaction transType, Object[] procParams) +265
    Insbridge.Net.Fwk.DAO.SqlProcessor.ExecuteScalarQueryProc(String subscriber, String datastore, String identifier, String command, Transaction transType, Object[] procParams) +101
    Insbridge.Net.Fwk.DAO.SqlProcessor.ExecuteScalarQuery(String subscriber, String identifier, String command) +22
    Insbridge.Net.RM.IBRM.ExeScalar(String cmd, Object[] paramsList) +99
    Insbridge.Net.RM.Components.Algorithms.AlgEdit.Page_Load(Object sender, EventArgs e) +663
    System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14
    System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35
    System.Web.UI.Control.OnLoad(EventArgs e) +99
    System.Web.UI.Control.LoadRecursive() +50
    System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627
    My Insbridge versions are as follows:
    IBRU 4.0.0 Copyright ©2010, Oracle. All rights reserved. - Version Listing
    RMBUILD.DLL 4.0.0.0 (x86)
    SRLOAD.DLL 3.13.0 (x86)
    IBRM v04.00.0.17
    IB_CLIENT v04.00.0.00
    RM.DLL 4.00.0 (x86)
    IBFA 3.0.2
    OS: Windows Server 2003
    DB: Sql Server 2008
    Browser: IE8
    How do I solve this? Please help.

    This is an error due to a failed conversion from a character string to the int datatype. In fact, the table column contains the value "NO", which you are trying to convert to Int/SUM, which is illegal. Without your code and table structure, it's difficult to pinpoint your
    actual issue. But check all columns for the value "NO".

  • When I try to save a file the system stops responding.

    When I try to save a file the system stops responding.

    Thanks, Bill. Here's what I get when I try to "save as"

  • When I choose operation mapping, the system prompts "no suitable data found"

    When I choose operation mapping, the system prompts "no suitable data found"; I can't find the reason. I hope to get your help.

    Hi Nathan,
    Your Receiver Interface name is SI_Billhead_In.SI_BillHeadIn and the one you have used in ID is different from the one in Interface Determination.
    Try changing the above and it might work.
    Thanks & Regards,
    Tejas Bisen
