Eviction policy or Quartz job

I have a cache containing cars and bicycles. All items are stored in the same cache.
We only need to store 100 cars of each color: 100 red cars, 100 green cars, and so on.
For the bicycles, we need to keep 1000, regardless of color.
What would be the best way to achieve this:
1. Create a custom eviction policy. Is this really feasible, given that you have 2 different kinds of policies? Also, I'm not sure whether it's possible to define a policy that evicts cars based on the number of cars with the same color in the local cache. Perhaps with a custom unit calculator?
2. Create a Quartz job that runs periodically and filters out the items that need to be removed.
Best regards
Jan

Hi Jan,
a periodic job might be better for the task, but it will not be able to remove exactly the right number of entries if there are frequent changes to the cache.
The eviction policy, on the other hand, acts upon the content of the backing map. In a distributed cache the backing map does not correspond to the entire cache: it contains only those entries for which the primary copy resides in that JVM. Therefore, in a distributed cache, an eviction policy cannot evict entries based on the data in the entire cache.
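For example, the periodic-job approach could look something like the following untested sketch (the cache name "cars", the getColor() accessor on the cached Car objects, and the fixed color list are just assumptions for illustration):

import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.filter.EqualsFilter;

public class CarTrimJob implements Runnable {
    private static final int MAX_PER_COLOR = 100;
    private static final String[] COLORS = {"red", "green", "blue"};

    public void run() {
        NamedCache cars = CacheFactory.getCache("cars");
        for (String color : COLORS) {
            // Unlike a backing-map eviction policy, this query is cluster-wide
            Set keys = cars.keySet(new EqualsFilter("getColor", color));
            int excess = keys.size() - MAX_PER_COLOR;
            if (excess <= 0) {
                continue; // this color is already within its limit
            }
            // Collect an arbitrary batch of victims; a real job would order
            // them by age or last access first. The count may also be stale
            // by the time the remove runs, which is the caveat above.
            Set victims = new HashSet();
            for (Iterator iter = keys.iterator(); excess-- > 0 && iter.hasNext(); ) {
                victims.add(iter.next());
            }
            // Blind bulk remove through the keySet view
            cars.keySet().removeAll(victims);
        }
    }
}

A Quartz trigger could run this every few minutes; the bicycle limit would be one more query without the color filter.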
Best regards,
Robert

Similar Messages

  • Problem in clustered weblogic 9.1 servers duplicating quartz jobs

    We have a web application that is expected to run a Quartz job to insert one row in the database, but it inserted 2 rows. The Quartz job is kicked off by a servlet that is loaded when the weblogic server is started.
    I understand the Quartz job should be clustered with JDBCJobStore, too.
    My question is:
    if the clustered weblogic server is considered one running instance, and the Quartz job is kicked off by a servlet that is preloaded when the weblogic server starts, how can a job scheduled at a specific time (for example, 3am daily) run twice, i.e. each of the two clustered weblogic servers runs the job once? Does that mean our weblogic server clustering is not configured right?
    The two clustered weblogic servers are on separate Unix machines and are synchronized in time.
    Any idea about the problem? Just the weblogic cluster side, not the Quartz side.
    thanks
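    For reference, clustering Quartz itself with JDBCJobStore, so that only one node fires each job, would look roughly like this (an untested sketch; the data-source name myDS is a placeholder):

    import java.util.Properties;

    import org.quartz.Scheduler;
    import org.quartz.impl.StdSchedulerFactory;

    public class ClusteredSchedulerFactory {
        public static Scheduler create() throws Exception {
            Properties props = new Properties();
            props.setProperty("org.quartz.scheduler.instanceName", "MyClusteredScheduler");
            // AUTO gives each cluster node a unique instance id
            props.setProperty("org.quartz.scheduler.instanceId", "AUTO");
            // JDBCJobStore persists triggers in the shared database
            props.setProperty("org.quartz.jobStore.class",
                    "org.quartz.impl.jdbcjobstore.JobStoreTX");
            props.setProperty("org.quartz.jobStore.driverDelegateClass",
                    "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
            // "myDS" is a placeholder; it must match a configured Quartz data source
            props.setProperty("org.quartz.jobStore.dataSource", "myDS");
            // Nodes coordinate through the database so each trigger fires only once
            props.setProperty("org.quartz.jobStore.isClustered", "true");
            return new StdSchedulerFactory(props).getScheduler();
        }
    }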

    The "jsp:directive.include" tag is only valid in an well-formed XML file that specifies all the relevant namespaces. The "jsp:include" tag is used in a JSP file. I'll bet the beginning of your file shows that you don't have a valid and well-formed XML file.
    If you found that "jsp:include" worked, then that confirms you have an ordinary JSP file here. Why are you trying to use the XML form? The result of "jsp:include" in a JSP file will be exactly the same as the analogous tag in an XML file.

  • Custom Eviction Policy

    Hello,
    I am storing a few Maps in Coherence; each map might have a few hundred entries. The problem I have at this point is with the eviction policies. If I set the high units on the front map of a near cache to 100, it never works: it sees 1 entry per map.
    I created a custom eviction policy that removes items from my map when the size of the map reaches the high units of that particular near cache. The problem is that the item is then no longer present in the front or back map.
    Is there a way to create a custom eviction policy where I can evict entries out of the front map but not the back map?
    Thanks

    When I put my maps into Tangosol, I just do:
    NamedCache mCache = CacheFactory.getCache(cacheName);
    mCache.put(cacheKey, deepCopy(cacheObj));
    where cacheObj is a Map and I make a deep copy of that map for thread safety.
    The map contains over 100 entries.
    Here is my cache config file:
              <cache-mapping>
                   <cache-name>CouponCache</cache-name>
                   <scheme-name>content-near-with-eviction</scheme-name>
              </cache-mapping>
              <!--
                   Near caching scheme for all Content Caches
              -->
              <near-scheme>
                   <scheme-name>content-near-with-eviction</scheme-name>
                   <front-scheme>
                        <local-scheme>
                             <scheme-ref>default-front-eviction</scheme-ref>
                        </local-scheme>
                   </front-scheme>
                   <back-scheme>
                        <distributed-scheme>
                             <backing-map-scheme>
                                  <local-scheme>
                                       <scheme-ref>default-back-eviction</scheme-ref>
                                  </local-scheme>
                             </backing-map-scheme>
                        </distributed-scheme>
                   </back-scheme>
                   <!--
                    Specifies the strategy used to keep the front tier in sync with the back tier
                   -->
                   <invalidation-strategy>present</invalidation-strategy>
                   <!--
                        It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.
                   -->
                   <autostart>true</autostart>
              </near-scheme>
              <!--
                   Default front cache eviction policy scheme.
              -->
              <local-scheme>
                   <scheme-name>default-front-eviction</scheme-name>
                   <!--
                        Least Frequently Used eviction policy chooses which
                        entries to evict based on how often they are being
                        accessed, evicting those that are accessed least
                        frequently first. (LFU)
                        Least Recently Used eviction policy chooses which
                        entries to evict based on how recently they were
                        last accessed, evicting those that were not accessed
                        the for the longest period first. (LRU)
                        Hybrid eviction policy chooses which entries
                        to evict based the combination (weighted score)
                        of how often and recently they were accessed,
                        evicting those that are accessed least frequently
                        and were not accessed for the longest period first. (HYBRID)
                   -->
                   <eviction-policy>
                        <class-scheme>
                             <class-name>com.att.uma.cache.eviction.MyEvictionPolicy</class-name>
                        </class-scheme>
                   </eviction-policy>
                   <!--
                        Used to limit the size of the cache.
                        Contains the maximum number of units
                        that can be placed in the cache before
                        pruning occurs.
                   -->               
                   <high-units>100</high-units>     
                   <expiry-delay>12h</expiry-delay>
              </local-scheme>
    I did not list all of the methods specified by the interface, as they are irrelevant to my issue.
    Here is my eviction class:
     import java.util.Iterator;
     import java.util.Map;
     import java.util.Map.Entry;

     import org.apache.commons.logging.Log;
     import org.apache.commons.logging.LogFactory;

     import com.tangosol.net.cache.ConfigurableCacheMap.EvictionPolicy;
     import com.tangosol.net.cache.LocalCache;
     import com.tangosol.util.AbstractMapListener;

     public class MyEvictionPolicy extends AbstractMapListener implements EvictionPolicy {
         private final static Log log = LogFactory.getLog(MyEvictionPolicy.class);
         LocalCache m_cache = null;

         public MyEvictionPolicy() {}

         public void requestEviction(int cMaximum) {
             int cCurrent = m_cache.getHighUnits();
             if (log.isDebugEnabled()) {
                 log.debug("* requestEviction: current:" + cCurrent + " to:" + cMaximum);
             }
             // eviction policy calculations
             Iterator iter1 = m_cache.entrySet().iterator();
             while (iter1.hasNext()) {
                 Entry entry = (Entry) iter1.next();
                 if (cCurrent > cMaximum) {
                     if (entry.getValue() instanceof Map) {
                         // deepCopy is my own helper (the same one used on put)
                         Map map = (Map) deepCopy(entry.getValue());
                         Iterator iter2 = ((Map) entry.getValue()).keySet().iterator();
                         while (iter2.hasNext()) {
                             if (cCurrent == map.size()) {
                                 entry.setValue(map);
                                 break;
                             }
                             String key = (String) iter2.next();
                             map.remove(key);
                             if (log.isDebugEnabled()) {
                                 log.debug("* requestEviction: current:" + cCurrent + " to:" + cMaximum);
                             }
                         }
                     }
                 } else {
                     break;
                 }
             }
         }
     }

  • Cache eviction policy to evict data based on LRU

    I have a requirement where I have to evict data from the cache based on LRU.
    I've used the following configuration in cache-config.xml:
    <?xml version="1.0"?>
    <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>hello-cache</cache-name>
          <scheme-name>distributed</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
            <local-scheme>
              <scheme-ref>LocalSizeLimited</scheme-ref>
            </local-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
        <local-scheme>
          <scheme-name>LocalSizeLimited</scheme-name>
          <eviction-policy>LRU</eviction-policy>
          <high-units>5</high-units>
          <expiry-delay>2m</expiry-delay>
        </local-scheme>
      </caching-schemes>
    </cache-config>      
    Here I've given the eviction policy as LRU and the expiry delay as 2 minutes.
    I've inserted some data into the cache:
    k1=hello
    k2=world
    k3=sample
    Now I'm using the key k2 often, so the eviction should be as follows:
    first it should evict k1, then k3, and then k2 after 2 minutes.
    But the eviction occurs based on the order in which I inserted the data,
    i.e. first k1, then k2, and then k3.
    So can any of you tell me how to configure evicting the data based on LRU?
    Thanks
    Godwin

    Hi Godwin,
    you might need to add <flush-delay> to your local scheme;
    the default flush-delay is 1 minute, so maybe the cache evicts entries together due to the flush interval.
    local-scheme - Coherence 3.4 User Guide - Oracle Coherence Knowledge Base
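    For example (untested; everything except the added flush-delay element is your original scheme, and the 10s value is just an illustration):

    <local-scheme>
      <scheme-name>LocalSizeLimited</scheme-name>
      <eviction-policy>LRU</eviction-policy>
      <high-units>5</high-units>
      <expiry-delay>2m</expiry-delay>
      <!-- assumption: a short flush interval so expiry is checked more often -->
      <flush-delay>10s</flush-delay>
    </local-scheme>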
    Arkadiy

  • Quartz job in a clustered environment

    I have a Quartz scheduled job in my application. My application is deployed in a clustered environment, so one job is triggered per server. Can I control this so that the job gets fired from only one of the servers, and not from all the servers in the cluster? Can the servers in the cluster communicate and decide who fires the job? I don't want several instances of the same job; is this possible?

    The enterprise Quartz scheduler has advanced settings for clustering and load balancing. You should follow the Quartz documentation:
    http://www.opensymphony.com/quartz/wikidocs/TutorialLesson11.html
    Thanks
    Vishnu

  • Elastic Data and Eviction

    I understand that Eviction is not supported by ramjournal-scheme or flashjournal-scheme.
    We have a cache that is currently ~300GB in total heap across the cluster and we add several hundred thousand new entries every day averaging ~16KB per entry. We currently use local-scheme's eviction policy to control the size of the cache.
    We would like to use Elastic Data to grow the cache to a couple of terabytes but since eviction is not supported we would need another way to keep the size of the cache constrained. I have been pondering running a weekly job to purge old entries but I'm not sure how to go about it.
    Our entry keys encode the data we need in order to decide which entries should be evicted. Would it be reasonable to call keySet with a filter that would select those keys which should be evicted? The total number of entries would be on the order of 100 million and the matching key set, if run weekly, would be ~2 million. The javadoc says the resulting set may not be backed by the Map so calling clear() won't do the trick. Also, the NamedCache doesn't seem to have a removeAll() method so that's out.
    Any advice would be greatly appreciated.

    Hi Larkin,
    keySet().removeAll() is what you are looking for if you want the effect of a map.removeAll(). It has been stated several times before that keySet() is lazily evaluated, which allows keySet().remove() and keySet().removeAll() to act as cheaper, blind versions of remove()/removeAll().
    The keySet() method without filtering is lazily backed by the named cache; keySet(Filter) is not backed by the cache.
    On the other hand, if you know you want to remove the filtered entries, why not just send an entry processor with a filter selecting the keys you want removed?
    Removing millions of entries all at once may be problematic due to the amount of events and backup traffic it generates, but it is easy to remove them in smaller chunks by writing a filter which selects at most a certain number of entries, by retaining at most that number of keys in the keys parameter of the applyIndex() method (applyIndex() is usable for key-only filtering even if you don't have indexes).
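    A rough, untested sketch of such a capping filter (it assumes the wrapped filter implements EntryFilter, as the built-in filters do; note the cap applies per storage member, since applyIndex runs on each member):

    import java.util.Iterator;
    import java.util.Map;
    import java.util.Set;

    import com.tangosol.util.Filter;
    import com.tangosol.util.filter.EntryFilter;
    import com.tangosol.util.filter.IndexAwareFilter;

    public class CappedFilter implements IndexAwareFilter {
        private final Filter delegate;
        private final int maxKeys;

        public CappedFilter(Filter delegate, int maxKeys) {
            this.delegate = delegate;
            this.maxKeys = maxKeys;
        }

        public Filter applyIndex(Map mapIndexes, Set setKeys) {
            // Let the wrapped filter narrow the candidate key set first, if it can
            Filter remaining = (delegate instanceof IndexAwareFilter)
                    ? ((IndexAwareFilter) delegate).applyIndex(mapIndexes, setKeys)
                    : delegate;
            // Then retain at most maxKeys keys for this pass
            int count = 0;
            for (Iterator iter = setKeys.iterator(); iter.hasNext(); ) {
                iter.next();
                if (++count > maxKeys) {
                    iter.remove();
                }
            }
            return remaining;
        }

        public int calculateEffectiveness(Map mapIndexes, Set setKeys) {
            return 1; // cheap: works directly on the key set
        }

        public boolean evaluateEntry(Map.Entry entry) {
            // assumes the delegate is an EntryFilter (true for the built-in filters)
            return ((EntryFilter) delegate).evaluateEntry(entry);
        }

        public boolean evaluate(Object o) {
            return delegate.evaluate(o);
        }
    }

    Each weekly run could then loop on cache.keySet().removeAll(cache.keySet(new CappedFilter(yourFilter, chunkSize))) until the inner query returns nothing.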
    Best regards,
    Robert

  • What is the appropriate data expiration / eviction scheme for the following scenario

    We currently have an expiration scheme through which data entries in cache-A get expired after the cache hits a certain size. The LRU entries are expired after the max threshold is reached. But the issue is that certain entries in the cache are rendered meaningless without the other entries. There is a virtual grouping of entries in this cache.
    Is there an expiration scheme that can be defined through which a group of entries can be deleted in one shot, and how can this policy be enforced.
    The other way I would perceive this to work is to separate the currently virtually grouped entries into their own individual caches. This seems the logical thing to do, except for the issue that we would still like to keep all these caches in the same service, and have an eviction policy defined at the service level instead of at the cache level.
    Is it possible to define an eviction policy at the service level, for example to satisfy the below scenario.
    Service-A has 10 caches in it [cache-1, cache-2, ....., cache-10], and there is a limit of 10GB defined on the service. So, if the total utilization of all the caches in Service-A reaches 10GB, an entire cache from among [cache-1, cache-2, ....., cache-10] will need to be cleared and the memory made available. Which one of the caches is cleared will depend on which cache satisfies the LRU condition. So, maybe the entire cache-5 should be deleted if the entries in that cache were the least recently used.
    Please let me know if the above scenario can be implemented.
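    For the one-shot group deletion, would something like this minimal sketch be reasonable (untested; it assumes our values expose a hypothetical getGroupId() accessor)?

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.filter.EqualsFilter;

    public class GroupPurge {
        // Deletes every entry belonging to one virtual group in a single call
        public static void purgeGroup(String cacheName, Object groupId) {
            NamedCache cache = CacheFactory.getCache(cacheName);
            // keySet(Filter) queries the whole cluster; removeAll on the plain
            // keySet view then performs a blind bulk remove of those keys
            cache.keySet().removeAll(
                    cache.keySet(new EqualsFilter("getGroupId", groupId)));
        }
    }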
    Thanks
    Sumax

    Ricardo Pedro Rodrigues Ferrão wrote:
    > Hi Eugene,
    >
    > I’ll try to explain it better.
    >
    > For example the business unit A is “portable pc” and I have this business unit in both companies and what I want in management consolidation is to consolidate data by the business unit “portable pc” joining data from both companies.
    >
    > Thanks.
    Ricardo,
    I had a similar business scenario in an earlier project, and the expectations were almost the same.
    I created one consolidation area (legal consolidation) under one single data basis and assigned the business area as a sub-assignment. All the consolidation functions are executed by the business users for legal consolidation purposes, while the reports can be generated for both legal and management consolidation.
    In management consolidation reporting, your expectations can be met easily. The header will be the business area, while line-item data will be company-wise (if you would like the data displayed that way).
    For more information, you can check the threads created by me.

  • JDeveloper, ADF, Quartz: Quartz Using different Classpath than Backing bean

    Hi,
    I have an ADF 10g application. When I get an application module using Configuration.createRootApplicationModule, it works fine in a normal Java class or backing bean.
    Now I have Quartz configured in this project. The same code does not work in the Quartz Job class.
    When I print the classpath using System.getProperty("java.class.path") in a normal Java class or backing bean, it includes everything along with the BC4J jar files. But when I print the same thing in the Quartz job, its classpath contains only two jar files: C:\JDeveloper\j2ee\home\oc4j.jar;C:\JDeveloper\jdev\lib\jdev-oc4j-embedded.jar.
    Can anyone let me know what the issue could be, if someone has configured Quartz with JDeveloper and an ADF application?
    --Chintan

    Hi!
    Have you solved this problem? If yes, can you share the solution.
    Regards,
    Sašo

  • OIA jobs.xml: constrain execution of file import to only one of two app servers

    I am trying to get the jobs.xml and scheduling-context.xml files to fire an import job on one of the two app servers. There are two application servers utilizing the same shared database. Files to import into OIA reside on one of the two application servers file systems only and we want to use the jobs.xml and scheduling-context.xml files on one of the app servers to setup the imports.
    I have these two beans enabled
    <ref bean="usersImportJob"/>
    <ref bean="usersImportTrigger"/>
    This is a snippet from the jobs.xml file
    <bean id="usersImportTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
    <property name="jobDetail">
    <ref bean="usersImportJob"/>
    </property>
    <property name="cronExpression">
    <value>0 0/5 * * * ?</value>
    </property>
    </bean>
    <bean id="usersImportJob" class="org.springframework.scheduling.quartz.JobDetailBean">
    <property name="name">
    <value>Users Import</value>
    </property>
    <property name="description">
    <value>Users import Job</value>
    </property>
    <property name="jobClass">
    <value>com.vaau.rbacx.scheduling.manager.providers.quartz.jobs.IAMJob</value>
    </property>
    <property name="group">
    <value>SYSTEM</value>
    </property>
    <property name="durability">
    <value>true</value>
    </property>
    <property name="jobDataAsMap">
    <map>
    <!-- only single user name can be specified for jobOwnerName (optional)-->
    <entry key="jobOwnerName">
    <value>REPLACE_ME</value>
    </entry>
    <!-- multiple user names can be specified as
    comma delimited e.g user1,user2 (optional)-->
    <entry key="usersToNotify">
    <value>REPLACE_ME</value>
    </entry>
    <entry key="IAMActionName">
    <value>ACTION_IMPORT_USERS</value>
    </entry>
    <entry key="IAMServerName">
    <value>FILE_SERVER</value>
    </entry>
    <!-- Job chaining, i.e. specify the next job to run (optional) -->
    <entry key="NEXT_JOB">
    <value>rolesImportJob</value>
    </entry>
    </map>
    </property>
    </bean>
    QUESTION:
    Is the IAMServerName value supposed to have the literal hostname of the application server (e.g. "oia.bluebird.com") configured in place of the default "FILE_SERVER" text?
    Your insight is greatly appreciated.

    Hi Meg,
    I have to say that I've only seen jobs.xml defined for data feeds. It does not make calls out to app/DB servers OOTB.
    The alternative solutions are the following:
    1. Use your IDM provisioning connector to the application then pull the data from the IDM product into OIA
    2. Use the ETL functionality/engine within the OIA product to extract the data from the app server into a data feed (then OIA will pick that up)
    3. Construct some extracting functionality (like a java class) instead of ETL
    Hope this has helped
    Regards,
    Daniel Redfern
    Technicalconfessions.com

  • LMS 3.2 Syslog purge job failed.

    We installed LMS 3.2 on Windows Server 2003 Enterprise Edition. The RME version is 4.3.1. We have created a daily syslog purge job from RME > Admin > Syslog > Set Purge Policy. This job fails with the following error:
    "Drop table failed:SQL Anywhere Error -210: User 'DBA' has the row in 'SYSLOG_20100516' locked 8405 42W18
    [ Sat Jun 19  01:00:07 GMT+05:30 2010 ],ERROR,[main],Failed to purge syslogs"
    After a server restart, the job will again execute normally for a few days.
    Please check the attached log file of the specified job.
    Regards

    The only reports of this problem in the past have been with two jobs being created during an upgrade. This would not have been something you did yourself. I suggest you open a TAC service request so database locking analysis can be done to find which process is keeping that table locked.

  • Configure cache policy of a distributed cache

    How do I configure the cache policy of a distributed cache, e.g. eviction policy, high units, expiry delay, etc.? Should I use com.tangosol.util.Cache instead of com.tangosol.util.SafeHashMap as the backing map of the distributed cache?

    Hi Jin,
    I have attached an example of the descriptor used to set up a distributed caching scheme. This example shows both how to set up a 'vanilla' distributed cache and how to set up a 'size-limited/auto-expiry' cache.
    The 'HYBRID' local-scheme will automatically use the com.tangosol.net.cache.LocalCache implementation (a subclass of com.tangosol.util.Cache).
    Later,
    Rob Misek
    Tangosol, Inc.
    Coherence: Cluster your Work. Work your Cluster.
    Attachment: distributed-cache-config.xml (to use this attachment you will need to rename 22.bin to distributed-cache-config.xml after the download is complete)

  • Quartz in OC4J: javax.naming.NamingException

    We are running Quartz inside OC4J.
    We start OC4J with -userThreads turned on (e.g. java -jar oc4j.jar -userThreads).
    When the Quartz job fires, we do a Context lookup from inside the Quartz execute() method to find a JMS QueueConnectionFactory and a JMS Queue.
    We are getting the following error on the Context lookup:
    06/02/15 06:40:30 Not in an application scope - start OC4J with the -userThreads switch if using user-created threads
    06/02/15 06:40:30 javax.naming.NamingException: Not in an application scope - start OC4J with the -userThreads switch if using user-created threads
    06/02/15 06:40:30 at com.evermind.server.PreemptiveApplicationContext.getContext(PreemptiveApplicationContext.java:30)
    06/02/15 06:40:30 at com.evermind.naming.FilterContext.lookup(FilterContext.java:126)
    06/02/15 06:40:30 at com.evermind.server.PreemptiveApplicationContext.lookup(PreemptiveApplicationContext.java:42)
    06/02/15 06:40:30 at javax.naming.InitialContext.lookup(InitialContext.java:351)
    06/02/15 06:40:30 at timers.JmsEndpoint.createQueue(JmsEndpoint.java:404)
    06/02/15 06:40:30 at timers.JmsEndpoint.createQueueSender(JmsEndpoint.java:82)
    06/02/15 06:40:30 at timers.TaskQueueScheduler.execute(TaskQueueScheduler.java:149)
    06/02/15 06:40:30 at org.quartz.core.JobRunShell.run(JobRunShell.java:195)
    06/02/15 06:40:30 at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:520)
    We have tried the oracle.j2ee.rmi.RMIInitialContextFactory and the oracle.j2ee.naming.ApplicationClientInitialContextFactory naming factories and get the same error with both.
    Any ideas?
    Thanks, Ed

    Avi
    I tried the default Context factory as you suggested and that didn't work either. I think the problem is related to the fact that Quartz is running on its own thread within the same JVM as OC4J and may not have scope to the same Context object(s) that OC4J has.
    I did find a way to make this work. Before calling the Quartz Scheduler I used the OC4J Context object to lookup a JMS Queue and put the JMS QueueSender object into the Quartz JobDataMap object. I then scheduled the Quartz timer passing in the JobDataMap object. When the timer fires and my Quartz Job execute() method is called I retrieve the JMS QueueSender object from the JobDataMap and use it to send a JMS message.
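    In outline, it looked something like this (simplified from memory and untested; the JNDI names and the one-shot trigger are just examples):

    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueSender;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    import org.quartz.Job;
    import org.quartz.JobDataMap;
    import org.quartz.JobDetail;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;
    import org.quartz.Scheduler;
    import org.quartz.SimpleTrigger;
    import org.quartz.impl.StdSchedulerFactory;

    public class JmsSenderJob implements Job {

        // Scheduling side: runs on the OC4J request thread, where JNDI works
        public static void schedule() throws Exception {
            InitialContext ctx = new InitialContext();
            QueueConnectionFactory qcf =
                    (QueueConnectionFactory) ctx.lookup("jms/QueueConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/TaskQueue");
            QueueConnection conn = qcf.createQueueConnection();
            QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);

            // Hand the live QueueSender to the job through its JobDataMap
            JobDetail detail = new JobDetail("jmsSend", Scheduler.DEFAULT_GROUP, JmsSenderJob.class);
            detail.getJobDataMap().put("queueSender", sender);

            Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
            scheduler.start();
            scheduler.scheduleJob(detail, new SimpleTrigger("jmsSendTrigger", Scheduler.DEFAULT_GROUP));
        }

        // Job side: no JNDI lookup needed; the sender comes out of the map
        public void execute(JobExecutionContext context) throws JobExecutionException {
            JobDataMap map = context.getJobDetail().getJobDataMap();
            QueueSender sender = (QueueSender) map.get("queueSender");
            // ... build and send the JMS message with sender here ...
        }
    }

    Note that passing a live object through the JobDataMap like this only works with the in-memory job store, since the sender isn't serializable.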
    Thanks for your help,
    Ed

  • How to use the Quartz job scheduling framework

    How to use the Quartz job scheduling framework to send mails to a recipient when he places an order?
    Give me an idea about how it is going to detect the placing of an order. Is there any listener to detect that?
    Edited by: sivaprasadrao on Jun 30, 2008 3:15 AM

    Hi,
    How does he place an order? It must be some block of your code. Include your mail function in that block.
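    For example (an untested sketch; OrderService and the mail method are just hypothetical names, since there is nothing to "detect" if the notification lives in the order-placement code itself):

    public class OrderService {
        public void placeOrder(String orderId, String customerEmail) {
            // ... validate and persist the order here ...

            // Notify right where the order is placed; no scheduler or
            // listener is needed to detect the event
            sendConfirmationMail(orderId, customerEmail);
        }

        private void sendConfirmationMail(String orderId, String to) {
            // hypothetical mail call; a real application would use JavaMail here
            System.out.println("Mailing " + to + " about order " + orderId);
        }
    }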
    Regards,
    Ram.

  • Delete Job not owned by application development

    I have 2 jobs that were recently moved to the production tier. These 2 jobs are debugging/test jobs and were not supposed to be moved to production. The problem is that in production I have only view (read) access, and our administrator cannot see the jobs; only I can see them in Tidal. They can see the jobs in the Transporter but not in Tidal. How can our administrator delete the jobs? Please advise.
    I have a screen shot on the attached file.
    Thank you,
    Warren

    Hi Warren,
    Work with your security team: a Super User/Admin with sufficient rights is required to temporarily grant you access to delete jobs in production, or to impersonate you to delete the jobs, and then restore the original privileges.
    Note: If you have a Developer security policy, then a Super User/Admin should assign you to a different security policy that is very limited in access and number of users, modify that policy to allow deleting jobs in production, impersonate you to delete the jobs, and then restore the original privileges. It's good practice not to migrate jobs into pre-production or production as an individual user.
    BR,
    Derrick Au

  • Quartz vs. JMS?

    Hi All,
    I'd like to know the differences between the Quartz Job Scheduler and JMS (Java Message Service). Quartz lacks some features that JMS has (such as queuing and disallowing concurrent execution of jobs). When should one go for the Quartz Job Scheduler rather than using JMS?
    I'd like to use one of them as the scheduler in my program, and I'd like to know your suggestions and comments on this.
    Is any other third-party open source available to achieve the scheduling (queuing, disallowing concurrent execution)? The number of jobs scheduled may be more than 300 or 500.
    Thanks,
    J.Kathir

    If the purpose is only to schedule jobs, Quartz should go well. If you really have constraints like preventing concurrent execution of jobs, you should think of a JMS-based solution.
    A Google search brings up this link.
    http://www.redwood.com/information/java_scheduler.htm
