How to limit Write-Behind batch

We have the following scenario:
we use a read-write-backing-map-scheme with a write-delay of 60s.
The system inserts a lot of data; when the time comes to write it, Coherence finds 40-50k unsaved records and passes them all to the CacheStore in one batch.
Due to the data volume or database load, the CacheStore may then run for quite some time, several seconds for instance.
<read-write-backing-map-scheme>
    <scheme-name>TicketDatabaseScheme</scheme-name>
    <scheme-ref>DefaultDatabaseScheme</scheme-ref>
    <!--<write-delay>1m</write-delay>-->
    <cachestore-scheme>
        <class-scheme>
            <class-name>com.griddynamics.ticketon.app.dao.coherence.TicketCacheStore</class-name>
        </class-scheme>
    </cachestore-scheme>
</read-write-backing-map-scheme>

<read-write-backing-map-scheme>
    <scheme-name>DefaultDatabaseScheme</scheme-name>
    <internal-cache-scheme>
        <local-scheme>
            <scheme-ref>LocalScheme</scheme-ref>
        </local-scheme>
    </internal-cache-scheme>
    <write-delay>60s</write-delay>
</read-write-backing-map-scheme>
In this case we experience "Terminating guarded execution" errors followed by service termination:
2010-02-11 09:26:52.223/511.457 Oracle Coherence GE 3.5.2/463 <Error> (thread=DistributedCache:TicketonCache, member=2): Terminating guarded execution (due to hard timeout 1924ms ago) of Daemon{Thread="Thread[WriteBehindThread:CacheStoreWrapper(com.griddynamics.ticketon.app.dao.coherence.TicketCacheStore),5,WriteBehindThread:CacheStoreWrapper(com.griddynamics.ticketon.app.dao.coherence.TicketCacheStore)]", State=Running}
2010-02-11 09:26:52.225/511.459 Oracle Coherence GE 3.5.2/463 <Error> (thread=Termination Thread, member=2): Write-behind thread timed out; stopping the cache service
2010-02-11 09:26:52.226/511.460 Oracle Coherence GE 3.5.2/463 <D5> (thread=DistributedCache:TicketonCache, member=2): Service TicketonCache left the cluster
INFO 09:26:52,227 [http--80-22$27432016 DaoCoherenceImpl] - PROFILE_doCreatetickets putAll 200 tickets time 3444 time per 10 objects 172
INFO 09:26:52,227 [http--80-22$27432016 DaoCoherenceImpl] - PROFILE event and 1000 tickets ctreated time 9668
Broadcast Message from root (msglog) on ip-10-226-137-172 Thu Feb 11 09:57:18...
THE SYSTEM ip-10-226-137-172 IS BEING SHUT DOWN NOW ! ! !
Log off now or risk your files being damaged
...PROFILE_doCreatetickets putAll 200 tickets time 3365 time per 10 objects 168
2010-02-11 09:26:52.228/511.462 Oracle Coherence GE 3.5.2/463 <Info> (thread=http--80-27$25787595, member=2): Restarting Service: TicketonCache
INFO 09:26:52,229 [http--80-20$15974570 DaoCoherenceImpl] - PROFILE_doCreatetickets putAll 200 tickets time 3447 time per 10 objects 172
INFO 09:26:52,289 [http--80-22$26935588 BackingBeanSuper] - request HttpRequest[22]
[09:26:53.446] {http--80-35$24027494} java.lang.RuntimeException: Failed to start Service "TicketonCache" (ServiceState=SERVICE_STOPPED)
[09:26:53.446] {http--80-35$24027494} at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.waitAcceptingClients(Service.CDB:12)
To summarize: Coherence kills itself over a condition that is far from fatal.
Questions
1. Can I limit the size of the batch passed to the CacheStore?
2. Is it possible to configure the timeout behind "(due to hard timeout 1924ms ago)"?
3. Is there some way to handle a situation like this and prevent the Coherence cluster from killing itself?

Thank you Mark, you are very helpful.
Did you mean the following by "bundle strategy"?
<cachestore-scheme>
    <class-scheme>
        <class-name>com.griddynamics.ticketon.app.dao.coherence.TicketCacheStore</class-name>
    </class-scheme>
    <operation-bundling>
        <bundle-config>
            <operation-name>store</operation-name>
            <preferred-size>5000</preferred-size>
            <auto-adjust>true</auto-adjust>
        </bundle-config>
    </operation-bundling>
</cachestore-scheme>
And if yes, does it make sense?
I read it as "send records to TicketCacheStore 5000 per call"; am I right?
I dropped the delay to 10s and set the write-batch-factor to 0.5.
Now Coherence sends me 5-20k records at a time, and the CacheStore handles this successfully.
But! For various reasons it may sometimes run longer, because of a lock in the database for instance.
I want a durable solution for this case, not just a lower chance of meeting it.
Issuing a heartbeat from the CacheStore looks best to me now.
I found that the default guardian timeout is 65s, and raising it does not look like a good idea.
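A minimal sketch of the heartbeat idea, assuming a hypothetical persistToDatabase helper; GuardSupport.heartbeat() is the standard Coherence guardian API, and load() is stubbed out:

import com.tangosol.net.GuardSupport;
import com.tangosol.net.cache.AbstractCacheStore;

import java.util.Iterator;
import java.util.Map;

public class TicketCacheStore extends AbstractCacheStore {

    public Object load(Object key) {
        return null; // DB read omitted in this sketch
    }

    public void store(Object key, Object value) {
        persistToDatabase(key, value);
    }

    public void storeAll(Map entries) {
        int count = 0;
        for (Iterator it = entries.entrySet().iterator(); it.hasNext(); ) {
            Map.Entry entry = (Map.Entry) it.next();
            persistToDatabase(entry.getKey(), entry.getValue());
            // tell the service guardian this thread is alive, so a slow
            // database does not trigger "Terminating guarded execution"
            if (++count % 100 == 0) {
                GuardSupport.heartbeat();
            }
        }
    }

    private void persistToDatabase(Object key, Object value) {
        // hypothetical helper: the JDBC insert/update goes here
    }
}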

Similar Messages

  • Can you limit write-behind batch sizes to a set number in Coherence 3.5?

    I'm currently running Coherence 3.5.3p9 on Windows.
    The cache store is set up to use the write-behind scheme via the read-write-backing-map-scheme tag.
    Batching is enabled with a write-delay of 5s.
    My understanding is that essentially anything that was newly inserted into the cache more than 5 seconds ago becomes eligible for storage.
    Our application goes through a bit of a peak-trough cycle: sometimes very little data is inserted at once and sometimes a lot is. This results in quite varying batch sizes, and big batches do cause issues on our DB from time to time.
    I can decrease the write-delay to 1s in the hope that this will in turn decrease the batch sizes, but is there a way to set a specific number, e.g. that I only ever want to write 20 in a batch?

    Hi,
    you can just break down that big batch into smaller batches (DB transactions) yourself, and you can also decide that you don't want to write more at the moment.
    If you throw an exception, Coherence will retry whatever is left in the parameter map passed to storeAll (and in the parameter collection passed to eraseAll). It does not have to be the full list; you are expected to remove those entries/elements which you have successfully persisted.
    This way you can control the rate of writing yourself. Also, since the write-behind thread does not block event processing, you are reasonably safe to wait longer in those methods if you want to space out database transactions without returning from storeAll.
    To answer your question:
    You can either
    - do a physical transaction of 20 elements, remove those 20 elements from the map, then optionally wait and then continue with more elements from the map as long as there are any (this gives you the chance of controlling the rate of transactions).
    - send a physical transaction of at most 20 elements to the database, remove those 20 elements from the map and then throw a dummy exception (in which case Coherence requeues the rest... take care, after this they are considered freshly changed entries).
    Best regards,
    Robert
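    A rough sketch of the first option above, assuming a hypothetical saveBatch helper that runs one physical DB transaction (the 20-element chunk size is from the question; the class name is made up):

    import com.tangosol.net.cache.AbstractCacheStore;

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.List;
    import java.util.Map;

    public class ChunkingCacheStore extends AbstractCacheStore {

        private static final int CHUNK_SIZE = 20; // illustrative transaction size

        public Object load(Object key) {
            return null; // DB read omitted in this sketch
        }

        public void store(Object key, Object value) {
            Map single = new HashMap();
            single.put(key, value);
            saveBatch(single);
        }

        public void storeAll(Map entries) {
            // snapshot the keys so we can remove from 'entries' safely
            List keys = new ArrayList(entries.keySet());
            for (int from = 0; from < keys.size(); from += CHUNK_SIZE) {
                List chunkKeys = keys.subList(from, Math.min(from + CHUNK_SIZE, keys.size()));
                Map chunk = new HashMap();
                for (Iterator it = chunkKeys.iterator(); it.hasNext(); ) {
                    Object key = it.next();
                    chunk.put(key, entries.get(key));
                }
                saveBatch(chunk); // one physical DB transaction
                // Remove what was persisted: if a later chunk throws,
                // Coherence only requeues what is left in 'entries'.
                entries.keySet().removeAll(chunkKeys);
            }
        }

        private void saveBatch(Map chunk) {
            // hypothetical helper: JDBC batch insert/update in one transaction
        }
    }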

  • How to force write-behind store on cache node shutdown?

    Hi,
    I built a small pilot project based on Coherence and am now testing it for failover. I found replication issues with the distributed cache in the following scenario:
    - start cache node 1 (JVM instance 1);
    - connect Extend client to it and get 1 object from cache (only 1 object in the cache - loaded by CacheStore from DB);
    - change the object and put it back (I use EntryProcessor for this);
    - start cache node 2 (JVM instance 2);
    - stop cache instance 1 (write-behind store wasn't invoked yet: write-delay = 2m);
    - load/change the same object on node 2; all changes done on node 1 are lost.
    My expectation was that the cache would replicate its data between nodes when a new member joins the cluster. The backup count is 1 by default, right?
    What should I do in order to prevent such behavior? Is it possible to force write-behind store on cache node shutdown event?
    Thanks, Denis.
    My cache-config, just in case:
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>AccountCache</cache-name>
                <scheme-name>account-distributed</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <distributed-scheme>
                <scheme-name>account-distributed</scheme-name>
                <service-name>DistributedCache</service-name>
                <serializer>
                    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                    <init-params>
                        <init-param>
                            <param-type>String</param-type>
                            <param-value>account-pof-config.xml</param-value>
                        </init-param>
                    </init-params>
                </serializer>
                <backing-map-scheme>
                    <read-write-backing-map-scheme>
                        <scheme-name>AccountDatabaseScheme</scheme-name>
                        <internal-cache-scheme>
                            <local-scheme>
                                <!--scheme-ref>default-eviction</scheme-ref-->
                                <eviction-policy>LRU</eviction-policy>
                                <high-units>0</high-units>
                                <expiry-delay>30m</expiry-delay>
                            </local-scheme>
                        </internal-cache-scheme>
                        <cachestore-scheme>
                            <class-scheme>
                                <class-name>com.roox.bss.cache.store.AccountCacheStore</class-name>
                                <init-params>
                                    <init-param>
                                        <param-type>java.lang.String</param-type>
                                        <param-value>dburl_</param-value>
                                    </init-param>
                                    <init-param>
                                        <param-type>java.lang.String</param-type>
                                        <param-value>user</param-value>
                                    </init-param>
                                    <init-param>
                                        <param-type>java.lang.String</param-type>
                                        <param-value>password</param-value>
                                    </init-param>
                                </init-params>
                            </class-scheme>
                        </cachestore-scheme>
                        <write-delay>2m</write-delay>
                        <write-batch-factor>.5</write-batch-factor>
                    </read-write-backing-map-scheme>
                </backing-map-scheme>
            </distributed-scheme>
            <proxy-scheme>
                <service-name>ExtendTcpProxyService</service-name>
                <thread-count>10</thread-count>
                <acceptor-config>
                    <tcp-acceptor>
                        <local-address>
                            <address>localhost</address>
                            <port>9098</port>
                            <reuse-address>true</reuse-address>
                            <reusable>true</reusable>
                        </local-address>
                    </tcp-acceptor>
                    <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                            <init-param>
                                <param-type>String</param-type>
                                <param-value>account-pof-config.xml</param-value>
                            </init-param>
                        </init-params>
                    </serializer>
                </acceptor-config>
                <autostart>true</autostart>
            </proxy-scheme>
        </caching-schemes>
    </cache-config>

    solved with autostart=true

  • Write-Behind batch behavior in EP partition level transactions

    Hi,
    We use EntryProcessors to perform updates on multiple entities stored in the same cache partition. According to the documentation, Coherence handles all the updates in a "sandbox" and then commits them atomically to the cache backing map.
    The question is, when using write-behind, does Coherence guarantee that all entries updated in the same "partition level transaction" will be present in the same "storeAll" operation?
    Again, according to the documentation, the write-behind thread behavior is the following:
    1. The thread waits for a queued entry to become ripe.
    2. When an entry becomes ripe, the thread dequeues all ripe and soft-ripe entries in the queue.
    3. The thread then writes all ripe and soft-ripe entries either via store() (if there is only the single ripe entry) or storeAll() (if there are multiple ripe/soft-ripe entries).
    4. The thread then repeats (1).
    If all entries updated in the same partition level transaction become ripe or soft-ripe at the same instant they will all be present in the storeAll operation. If they do not become ripe/soft-ripe in the same instant, they may not be all present.
    So it all depends on the behavior of the commit of the partition level transaction: if all entries get the same update timestamp, they will all become ripe at the same time.
    Does anyone know what is the behavior we can expect regarding this issue?
    Thanks.

    Hi,
    That comment is still correct for 12.1 and 3.7.1.10.
    I've checked the Coherence APIs and the ReadWriteBackingMap behavior, and although partition level transactions are atomic, the updated entries are added one by one to the write-behind queue. For each added entry, Coherence uses the current time to calculate when that entry becomes ripe, so there is no guarantee that all entries in the same partition level transaction become ripe at the same time.
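    For reference, the soft-ripe window is what <write-batch-factor> controls: entries become soft-ripe after write-delay * (1 - factor), and one sweep dequeues all ripe and soft-ripe entries together. A config sketch (values illustrative) that makes entries queued close together more likely, though still not guaranteed, to land in the same storeAll:

    <read-write-backing-map-scheme>
        ...
        <write-delay>10s</write-delay>
        <!-- soft-ripe after 10s * (1 - 0.9) = 1s, so one sweep can pick up
             entries queued up to 9s apart -->
        <write-batch-factor>0.9</write-batch-factor>
    </read-write-backing-map-scheme>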
    This leads me to another question.
    We have a use case where we want to split a large entity we are storing in coherence into several smaller fragments. We use EntryProcessors and partition level transactions to guarantee atomicity in operations that need to update more than one fragment of the same entity. This guarantees that all fragments of the same entity are fully consistent. The cached fragments are then persisted into database using write-behind.
    The problem now is how to guarantee that all fragments are fully consistent in the database. If we just rely on the Coherence write-behind mechanism we will have eventual consistency in the DB, but in case of a multi-server failure the entity may become inconsistent in the database, which is a risk we would not like to take.
    Is there any other option/pattern that would allow us to either store all updates done on the entity or no update at all?
    Probably if in the EntryProcessor we identify which entities were updated and place them in another persistence queue as a whole, we will be able to achieve this, but this is the kind of tricky workaround that we would not like to use.
    Thanks.

  • Write behind batch behaviour confirmation

    Scenario as follows. A cache is configured with a write-behind delay of 10 seconds. An entry is inserted into the cache. For some reason the DB is slow, and when the cache store attempts to flush this entry to disk it takes 1 minute. During this period the same entry is updated in the cache twice, but those updates are more than 10 seconds apart.
    Do those last 2 updates get aggregated into one store operation to the cache store? Or will the cache store be asked to store both versions of the entry because the updates were more than the batch delay apart?
    Thanks

    Hi anonymous user628574,
    The consecutive puts will indeed be coalesced into a single "store" operation.
    Regards,
    Gene

  • How do I write a command in a .batch file written in NOTEPAD to run a powershell command?

    How do I write a command in a .bat batch file, written in Notepad, to run a PowerShell command?
    Example:
    powershell -Command "& {Update-Help;}"
    Would this be a correct command to put in a .bat file to update help information in PowerShell?
    All I want to know is how to write a Windows PowerShell command in a .bat batch file.
    Multi-Commands
    Single-Commands
    Charles Wright

    Hi,
    You can separate multiple commands with a semicolon (;).
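    For example, a minimal .bat file for the Update-Help case above might look like this (the second command is just an illustration of the semicolon; Update-Help typically needs an elevated prompt):

    @echo off
    powershell -Command "& {Update-Help; Get-Help Get-Process}"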

  • Write-Behind, Expiration, and SQL Exceptions.

    Hi Chaps,
    If a cache with write-behind enabled has problems writing to the DB I understand that Coherence will re-queue the objects and write them when the DB is available.
    The problem I have is that (after a DB failure) I don't see them being written - I can see these items in the cache but not in the DB, even several hours after the outage. (Items that were added to the cache after the outage are being written).
    Is there anything the cache store methods (specifically store()) need to do with regard to exceptions to ensure that these items are re-queued?
    The next question: I was also wondering how this is managed with regard to expiry?
    We have our own expiry routine which removes items from the cache that are older than 24 hours (this predates being able to expire objects by specifying the timeout in the put() method call, which I am intending to switch to).
    If an item has not been written to the DB due to an outage and is then expired (by our own routine or by Coherence), is it then lost forever, or will it remain in the queue? (Seeing as the queue holds references I am guessing not, but thought I'd check.)
    Thanks,
    Randal.

    Jon,
    I have a question related to this... If you remember, a few weeks back I stumbled upon the problem that the "version-persistent" map for the versioned-backing-map-scheme does not accept putAll operations. The workaround until you guys implement it was to override the storeAll method of the cache store and throw an UnsupportedOperationException (to force individual puts).
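    For reference, the workaround amounts to something like this in the cache store (the ReadWriteBackingMap then falls back to one store() call per entry):

    public void storeAll(Map entries) {
        // deliberately unsupported: forces Coherence to call
        // store(key, value) for each entry individually
        throw new UnsupportedOperationException();
    }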
    Well, although this workaround works, I am getting tons and tons of:
    2006-04-06 17:18:27.347 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): The CacheStore "MyCacheStore@46b9979b" does not support storeAll().
    2006-04-06 17:18:27.348 Tangosol Coherence 3.1/339 <Error> (thread=WriteBehindThread:MyCacheStore, member=1): Failed to store keys="[16, 18, 21, 26, 5, 13, 14, 25, 17, 15, 23, 19, 2, 6, 9, 7]":
    java.lang.UnsupportedOperationException
    at ...MyCacheStore.storeAll(MyCacheStore.java:126)
    at com.tangosol.net.cache.ReadWriteBackingMap$CacheStoreWrapper.storeAll(ReadWriteBackingMap.java:3820)
    at com.tangosol.net.cache.ReadWriteBackingMap$WriteThread.run(ReadWriteBackingMap.java:3538)
    at com.tangosol.util.Daemon$1.run(Daemon.java:63)
    2006-04-06 17:18:27.349 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): Requeued store for key="16"
    2006-04-06 17:18:27.349 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): Requeued store for key="18"
    2006-04-06 17:18:27.350 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): Requeued store for key="21"
    2006-04-06 17:18:27.351 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): Requeued store for key="26"
    The first UnsupportedOperationException is expected, but I'm not sure what the requeued warnings are all about. These are not DB failures... it is something else. (Mind you, this happens when trying to load a lot of data into the map.)
    1- Is this requeuing related to, or the same as, the requeuing done for failed DB stores?
    2- Is it possible to "lose" stores if I don't configure the write-requeue-threshold with very, very high values? I must ensure I don't lose anything.
    On a related note, in some circumstances I need to ensure that the "write queue" is flushed or cleared. For example, I may want to force a flush of all pending stores (and wait/block until that's done).
    I have looked into it and I don't see how to do it. I can read the write-queue length, but I believe it is not very accurate: my tests seem to indicate that the write-behind thread may take the entries to store off the write queue and then deal with them in parallel (which means there are still entries pending although the write-queue size is 0). Also, there are some calls from the cache store that, at first, seem to give some access to the write thread (potentially allowing me to tell it to flush or discard any pending stores)... but I believe all of those functions are protected... though there may be other ways.
    I guess my second batch of questions is:
    1- How can I effectively force a flush (or clear) of the pending stores, such that not a single store is pending in any queue (visible or invisible to the programmer)?
    2- What is the role of re-queuing in these situations? Where does the queue sit: the thread? the cache store? Who is responsible for retrying, and when? ... I would like to flush those entries too.
    A quick explanation of the operation of the write thread would also be much appreciated.
    Thanks!
    Josep M.

  • Write-behind max speed?

    Hi,
    We are trying to test the speed of the write-behind mechanism, and we would be interested to know how other Coherence users handle, for example, writing 1 million rows into the database.
    At the moment, using JDBC batch inserts, we can write approximately 30000 rows per minute, which means it would take about 30 minutes to save 1 million rows. Are there any methods other Coherence users employ that can improve on this?
    Many thanks,

    user738616 wrote:
    Hi,
    This has nothing to do with Coherence, as the implementation of CacheStore is outside of Coherence. Apart from JDBC batching, you should try using PL/SQL bulk binds for such numbers.
    Hope this helps!
    Cheers,
    NJ

    Hi NJ,
    we actually measured PL/SQL bulk binds against plain SQL (both with JDBC)... for anything which can be translated to plain inserts/updates, plain SQL is way faster (more than 10x).
    You only win with bulk binds when the statement you send down does more complex logic across multiple statements, so that you also win by optimizing away the round trips.
    Best regards,
    Robert
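    For what it is worth, a typical JDBC-batching storeAll looks roughly like this (the table, columns, and getConnection() helper are made up for the sketch; assumes the usual java.sql imports):

    public void storeAll(Map entries) {
        try {
            Connection con = getConnection(); // hypothetical pooled connection
            con.setAutoCommit(false);
            PreparedStatement stmt = con.prepareStatement(
                    "INSERT INTO ROWS_TABLE (ID, PAYLOAD) VALUES (?, ?)");
            for (Iterator it = entries.entrySet().iterator(); it.hasNext(); ) {
                Map.Entry e = (Map.Entry) it.next();
                stmt.setObject(1, e.getKey());
                stmt.setObject(2, e.getValue());
                stmt.addBatch();
            }
            stmt.executeBatch(); // one round trip for the whole batch
            con.commit();
            stmt.close();
            con.close();
        } catch (SQLException e) {
            // propagate so Coherence requeues the batch
            throw new RuntimeException(e);
        }
    }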

  • How to limit the number of rows in a smart form / SAP script

    Can anyone tell me how to limit the number of rows in the output of a SAP script/smart form? I have tried "protect/endprotect" in SAP script but have no idea how to do it in a smart form. In SAP script the only way I have managed it is by reducing the size of the main window.

    Hi,
    In a Smartform also, why don't you try reducing the size of the window if you want to limit the rows in it? Also, if you are printing line items in a LOOP, you can write
    LOOP AT T_ITAB FROM 1 TO N
    in the LOOP node if you know exactly how many records you want to display in one page of processing.
    regards,
    Mahesh

  • How to limit dimension values in Query Builder

    Hi,
    I want to learn how to limit dimension values in Query Builder. I want the user who runs Query Builder to not see dimension values which I do not want to display.
    For example:
    The time dimension has "2000" and "2001" values by default, as defined in the database. Normally when I run Query Builder to create a new presentation or calculation, the dimensions panel displays all values of each dimension. The user can select the values he/she wants from that list.
    But I want to limit the values the user can see in the dimensions panel of Query Builder. For this time-dimension example, I want to hide the "2001" value from the user. How can I do it? If there is any sample code, please share.
    Thank you very much for your help.
    ilknur

    Thanks Thomas, excellent suggestions as always.
    If anybody wants more information regarding Thomas' suggestions please refer to
    Oracle9i OLAP Developer's Guide to the OLAP DML for the AW Permit command.
    In summary: to create permission programs, you define two programs with the names permit_read and permit_write. In these programs, you specify PERMIT commands that grant or restrict access to individual workspace objects. In addition, you write these programs as user-defined functions that return a Boolean value, where the return value indicates to Oracle OLAP whether or not the user has the right to attach the workspace.
    For relational security there is an excellent tutorial on the database pages of OTN, follow this link:
    http://otn.oracle.com/obe/start/index.html
    then follow the links for Oracle9i Database Release 9.2.0.2 - Security - Creating Label-based Access Control. This module describes how to use Oracle Label Security to setup security based on label policies
    These security layers should be transparent to the OLAP metadata layer, therefore, once you have implemented your chosen security method your BI Beans application will only need to connect with the appropriate user to inherit the security layer. For more information see the Security section of the BI Beans Help:
    http://otn.oracle.com/bibeans/903/help/
    BI Beans product Management Team
    Oracle Corporation

  • Write behind cache, DB down, when should the system stop taking new data in

    Hello:
    We are trying to use Coherence for our custom ESB, which is brokering payloads of various sizes between consumer and provider applications.
    Before Coherence, stopping our DB meant organization-wide outage for critically important business services.
    Since we have at least 40G of RAM in the production environment, we believe our app
    can use the Coherence write-behind option to tolerate at least several hours' worth of DB outage.
    We are currently using a near cache backed by a distributed cache in write-behind mode.
    9 business-service JVMs (storage enabled=false) use 30 storage-enabled JVMs.
    IMPORTANT: We need to create an automated alerting facility that determines when the
    amount of unsaved data reaches a critical level once the DB goes down. This alert should help us decide when our application must stop accepting inbound traffic.
    It is hard to use the QueueSize parameter for that because our payload memory footprint can vary from 1KB to 3MB.
    We do not expire any entries, in order to support queries against the cache during a DB outage.
    Our experiments with various flavors of overflow-scheme resulted in OutOfMemoryError, therefore
    we decided to implement a RAM-only cache as a first step.
    <near-scheme>
        <scheme-name>message_payload_scheme</scheme-name>
        <front-scheme>
            <local-scheme>
                <scheme-ref>limited_entities_front_scheme</scheme-ref>
                <high-units>100</high-units>
            </local-scheme>
        </front-scheme>
        <back-scheme>
            <distributed-scheme>
                <backing-map-scheme>
                    <read-write-backing-map-scheme>
                        <internal-cache-scheme>
                            <local-scheme>
                                <scheme-ref>limited_bytes_scheme</scheme-ref>
                                <high-units>199229440</high-units>
                            </local-scheme>
                        </internal-cache-scheme>
                        <cachestore-scheme>
                            <class-scheme>
                                <class-name>com.comp.MessagePayloadStore</class-name>
                            </class-scheme>
                        </cachestore-scheme>
                        <read-only>false</read-only>
                        <write-delay-seconds>3</write-delay-seconds>
                        <write-requeue-threshold>2147483646</write-requeue-threshold>
                    </read-write-backing-map-scheme>
                </backing-map-scheme>
                <autostart>true</autostart>
            </distributed-scheme>
        </back-scheme>
    </near-scheme>

    <local-scheme>
        <scheme-name>limited_entities_front_scheme</scheme-name>
        <eviction-policy>LRU</eviction-policy>
        <unit-calculator>FIXED</unit-calculator>
    </local-scheme>

    <local-scheme>
        <scheme-name>limited_bytes_scheme</scheme-name>
        <eviction-policy>HYBRID</eviction-policy>
        <unit-calculator>BINARY</unit-calculator>
    </local-scheme>

    Good info ... I feel like I need to restate my original question along with a couple of new questions caused by the discussion above.
    Q1. Does Coherence evict 'dirty', or 'queued', or 'unsaved' objects for the cache configuration provided above?
    The answer should be 'NO'; otherwise Coherence is unsafe to use as a system of record,
    since it should not just drop unsaved information on the floor.
    Q2. What happens to the front tier of the near+partitioned write-behind cache described above when the amount of unsaved data exceeds the max cache capacity defined via high-units?
    I would expect map.put to start throwing exceptions: the cache storage is full, so it should not accept more data.
    Q3. How can I determine the moment when the amount of dirty data in bytes(!), not in objects, hits 85% of the
    maximum allowed cache capacity configured in bytes (using the high-units param and BINARY calculator)?
    A 'DirtyUnits' counter can probably be built with some lower-level Coherence API. Can we use
    this API?
    Please understand that we purchased Coherence for reliability, for making our
    system independent from short DB outages, for keeping our business services up
    and running when the DBAs need some time for admin operations like rebuilding an index.
    Performance benefits are secondary and are not as obvious for our system, which
    uses primary keys only and has a well-tuned co-located Oracle back-end.
    We simply cannot put Coherence into production unless we prove that Coherence
    can reliably hold the data and give us information about an approaching crisis
    (the cache full of unsaved data).
    If possible, forward this message to Cameron Purdy,
    who was presenting Coherence to our team several months ago.
    Thanks,
    Vasili Smaliak
    Applications Architect, Enterprise App Integration
    GMAC ResCap
    [email protected]
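    As a starting point for the alerting facility: the depth of the write-behind queue (in entries, not bytes) is exposed via JMX as the QueueSize attribute of the Cache MBean. A monitoring sketch, assuming JMX management is enabled on the node (e.g. -Dtangosol.coherence.management=all) and that the MBean name pattern matches your deployment:

    import java.lang.management.ManagementFactory;
    import java.util.Iterator;
    import java.util.Set;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class WriteBehindMonitor {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // back-tier cache MBeans registered by Coherence
            Set names = server.queryNames(
                    new ObjectName("Coherence:type=Cache,tier=back,*"), null);
            for (Iterator it = names.iterator(); it.hasNext(); ) {
                ObjectName name = (ObjectName) it.next();
                Number queueSize = (Number) server.getAttribute(name, "QueueSize");
                System.out.println(name + " QueueSize=" + queueSize);
                // raise an alert once queueSize crosses a chosen threshold;
                // a bytes-based threshold would need a custom counter
                // maintained by the CacheStore itself
            }
        }
    }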

  • In MIGO, how to check if the batch numbers are the same.

    Dear all,
    in MIGO, when I try to transfer a material, how can I write some code to check whether the batch numbers of From and Dest are the same?
    Desmond

    closed

  • How we can create the batch file to download the data in from URL in ssis

    Hi,
    can anyone help with the below requirement?
    I have to create a batch file to download a report (the report is in CSV format) from a URL.
    The requirement is as follows:
    1. We have some reports available at the URL.
    2. Whenever we execute the URL, the report should be downloaded and saved to a local folder.
    3. I have to implement this requirement in a batch file.
    Could anyone let me know how we can create the batch file for the above requirement?

    Hi Priya.N,
    If you use SQL Server Reporting Services for reporting, you can use Visakh's suggestion: create a script file which calls the Reporting Services Web Service to render the report in CSV format and save it to the destination folder, and then create a batch file to run the rs.exe utility, which executes the .rss script file. For more information, please see:
    Report Server Web Service
    ReportExecutionService.Render Method
    rs Utility (rs.exe) (SSRS)
    If you use other reporting tools, it depends on the reporting functionality, and this requirement may not be achievable.
    Regards,
    Mike Yin
    TechNet Community Support
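    The batch file itself can then be a single rs.exe call, e.g. (the server URL and script name are placeholders; the rendering logic lives in the .rss script):

    rs.exe -i SaveReportAsCsv.rss -s "http://myserver/reportserver"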

  • How to allow write privileges by a process to files?

    Hi there
    In SPARC-Solaris 8:
    How do I allow write privileges by a process to files?
    Thanks in advance,
    César

    Access to files is controlled by the owner of the files, the owner of the process trying to access the files, and the permissions set on the files.
    You can use user, group, or world permissions to limit access to files by processes owned by different users.
    For example, a file with permissions of 777 (rwxrwxrwx) could be written to by every user on the system. A file with 760 (rwxrw----) could be written to by its owner or anyone in the same group, but grants no general write permission.
    Read the man pages on group, chmod, etc. to get an understanding of how the permissions and the owner of a process interact.

  • How to limit threads in a Java QuickSort algorithm.

    With the following piece of code I quickly run into OutOfMemory exceptions, since the code starts creating threads exponentially. However, after waiting for a long time (for all the exception texts to print out, obviously making it slower than running with one thread only), the program eventually ends with the correct result. Is there a way to run the code as is, but making sure that no more than x threads are running at a time?
    Things I have tried:
    - sleeping the current thread - makes it slower than one thread.
    - busy-looping while Thread.activeCount() > x - doesn't work at all.
    - suspending/resuming threads based on the activeCount - doesn't work either.
    Code:
    import java.util.*;
    import java.io.*;

    public class ThreadedSort extends Thread {
        private ArrayList<Integer> _list;

        public ThreadedSort() {
            _list = new ArrayList<Integer>();
        }

        public ThreadedSort(ArrayList<Integer> list) {
            _list = list;
        }

        public void run() {
            threadQuickSort();
        }

        public void threadQuickSort() {
            try {
                if (_list.size() < 2)
                    return;
                Integer pivot = _list.get(_list.size() - 1);
                ArrayList<Integer> left = new ArrayList<Integer>();
                ArrayList<Integer> right = new ArrayList<Integer>();
                // partition everything except the pivot (the last element)
                for (int i = 0; i < _list.size() - 1; i++) {
                    Integer next = _list.get(i);
                    if (next <= pivot)
                        left.add(next);
                    else
                        right.add(next);
                }
                // sort each half in its own thread -- this is what makes
                // the number of threads grow exponentially
                ThreadedSort leftThread = new ThreadedSort(left);
                ThreadedSort rightThread = new ThreadedSort(right);
                leftThread.start();
                rightThread.start();
                leftThread.join();
                rightThread.join();
                // recombine: left + pivot + right
                _list.clear();
                _list.addAll(leftThread._list);
                _list.add(pivot);
                _list.addAll(rightThread._list);
            } catch (InterruptedException ie) {
                ie.printStackTrace();
            }
        }

        public static void main(String[] args) {
            Random rand = new Random();
            ArrayList<Integer> arr1 = new ArrayList<Integer>();
            try {
                FileWriter fout = new FileWriter("out.txt");
                BufferedWriter bw = new BufferedWriter(fout);
                for (int i = 0; i < 2000; i++) { // ***
                    arr1.add(rand.nextInt(2000000));
                }
                ThreadedSort sorter3 = new ThreadedSort(new ArrayList<Integer>(arr1));
                long before_threadedQuickSort = System.currentTimeMillis();
                sorter3.start();
                sorter3.join();
                long after_threadedQuickSort = System.currentTimeMillis();
                bw.write("time for threadedQuickSort: "
                        + (after_threadedQuickSort - before_threadedQuickSort));
                bw.newLine();
                bw.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
            System.out.printf("\n\n");
        } // end of main
    } // end of class

    So you don't have to go through the code:
    Quicksort:
    With one thread only, I start the sorting by choosing a pivot from the initial list and then generating a left list (all values less than the pivot) and a right list (all values greater than the pivot).
    Then I recursively call quicksort with the left and right lists.
    With one thread that works fine.
    threadedQuickSort:
    Each thread object has a variable _list that keeps its list.
    From the initial thread, I again generate two lists (left and right), but then I construct two threadable objects with those lists and start them, creating a new thread for each list.
    So each thread will create two new threads.
    For an initial list with up to 2000 rows, that yields the correct result, although much slower than the one-threaded version. The maximum number of threads ever used was around 500~600.
    For a list with 20000 rows, my machine chokes at around 2600 threads (OutOfMemory exception).
    My processor is an 8 core (each core can run up to two threads), giving me a total of 16 actual threads I can work with.
    So the question is how to limit the threads that are created to 16. There is no method in the Thread class where you can specify the max number of threads in a thread group.
    @ jverd:
    I looked at ThreadPoolExecutor, but it looks kind of complicated. I see that the idea is creating a queue of jobs to be executed (which would be the different sublists, i.e. the left and right lists) and then using the specified number of threads to work on the queued jobs, but I'm failing to understand how it works. Can you point to a simple example somewhere showing how to declare a ThreadPoolExecutor object and work with it? Thanks.
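    A minimal thread-pool example (just the declare/submit/shutdown cycle, not the full sort). One caveat: naively calling join()/get() on subtasks from inside pool threads can deadlock a fixed-size pool, because all 16 workers may end up blocked waiting for queued subtasks, so a recursive sort needs more care than plain task submission.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class PoolExample {
        public static void main(String[] args) throws InterruptedException {
            // at most 16 worker threads; extra jobs wait in the queue
            ExecutorService pool = Executors.newFixedThreadPool(16);
            for (int i = 0; i < 100; i++) {
                final int job = i;
                pool.execute(new Runnable() {
                    public void run() {
                        System.out.println(Thread.currentThread().getName()
                                + " running job " + job);
                    }
                });
            }
            pool.shutdown();                            // stop accepting new jobs
            pool.awaitTermination(1, TimeUnit.MINUTES); // wait for queued jobs
        }
    }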
