LVM questions regarding performance

Over the holidays, I'll be redoing my server (running Arch) using LVM. The situation is this: I have two internal HDDs (80 GB apiece) on the IDE bus, and an external USB 2.0 drive sitting around not being used. The current install is also partitioned in such a way that some partitions are nearly empty. So naturally I want to put things into a big volume group so that I'm not wasting space, knowing full well that there may be a slight performance hit (which I can justify).
I'll probably do it this way, using a separate /boot because of the legacy GRUB limitation, with the swap partitions outside the LVM to make sure they're contiguous:
/dev/sda1     /boot
/dev/sda2     swap 1
/dev/sda3     LVM partition #1
/dev/sdb1     swap 2
/dev/sdb2     LVM partition #2
/dev/sdc1     LVM partition #3
Anyway, my questions are these:
1) Do certain filesystems perform better than others when housed on LVM logical volumes?
2) As the two internal IDE drives are faster than the external one, I would like to "prioritize" the two LVM partitions that live on them. Is it possible to do this?
I know they seem like simple questions. I've searched, but can't come up with anything using the obvious search strings, so sorry if I'm overlooking something.

Thanks for your fast reply!
I moved from a replicated to a distributed cache:
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <!--
    Distributed caching scheme.
    -->
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <class-scheme>
          <scheme-ref>unlimited-backing-map</scheme-ref>
        </class-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <!--
    Backing map scheme definition used by all the caches that do
    not require any eviction policies.
    -->
    <class-scheme>
      <scheme-name>unlimited-backing-map</scheme-name>
      <class-name>com.tangosol.util.SafeHashMap</class-name>
      <init-params></init-params>
    </class-scheme>
  </caching-schemes>
</cache-config>
Everything is now much faster, which is fine.
However, if I try to put 200k instead of 100k objects into the cache (I do this in putAll batches of 10k each) I get a java.lang.OutOfMemoryError: Java heap space.
As you can see, the object has only 2 ints, so the raw data of 200k objects is only about 1.6 MB.
Any suggestions?
Update:
I increased the Java VM heap to 1 GB and now it works.
However, I notice that for each 100k objects (2 integers each) Java consumes roughly 80 MB of heap memory. At about 700 MB of heap the GC kicks in.
How can this be optimized? 8 bytes of payload is not that much; I doubt I'd be able to put REAL data records into the cache this way.
thanks!
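
A side note on where that ~80 MB per 100k entries goes: in a distributed cache the values sit in the backing map in serialized (Binary) form, so most of the footprint is default Java serialization overhead (class descriptors, object headers) plus the per-entry map and key objects, not the two ints themselves. One common mitigation is a compact serializer for the value class; below is a minimal sketch assuming the classic com.tangosol.io.ExternalizableLite interface and a hypothetical TwoIntValue class standing in for the real object:

import com.tangosol.io.ExternalizableLite;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Hypothetical value class with the two int fields described above.
// ExternalizableLite keeps the serialized form close to the raw 8 bytes,
// instead of the much larger default java.io.Serializable encoding.
public class TwoIntValue implements ExternalizableLite {
    private int first;
    private int second;

    public TwoIntValue() {
        // public no-arg constructor required for deserialization
    }

    public TwoIntValue(int first, int second) {
        this.first = first;
        this.second = second;
    }

    public void readExternal(DataInput in) throws IOException {
        first = in.readInt();
        second = in.readInt();
    }

    public void writeExternal(DataOutput out) throws IOException {
        out.writeInt(first);
        out.writeInt(second);
    }
}

Even then, expect a fixed per-entry cost for the key, the Binary wrappers and the map entry itself, so heap usage will never get anywhere near the raw 8 bytes per object.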

Similar Messages

  • ThinkPad Twist s230u important questions regarding performance

    Hello everyone,
    So, I recently bought a ThinkPad Twist (specifically the 33472GU, even though the outlet said 3347XF4) from the outlet to upgrade the laptop that I take to class, and I have a few questions that I hope get answered. I haven't really seen much attention paid to this particular device, and it saddens me to see how little attention it has received.
    In particular, I would have liked to see more response and helpfulness from Lenovo staff to address these concerns, but if anyone else has input, please let me know. Here is my list:
    1.) It is true what they say: the battery doesn't last long. I can barely squeeze 3-4 hours out of it depending on what I am doing. This is after I turned many of the settings down and even turned off things like Bluetooth and the Windows 8 effects. How can Lenovo advertise 6-7 hours on this thing? No, my battery is not dead; in fact, it came with 46 Wh of capacity, better than what it was designed for (42 Wh). So I don't want to get sent to technical support to "fix" it. I want an honest answer and an indication from staff that this issue is being taken seriously and addressed.
    2.) With the above being said, is there a definitive answer from anyone about how I can extend my battery life to at least 5 hours without crippling the machine? I'm not willing to turn EVERYTHING off. That defeats the purpose of having bought the twist in the first place.
    3.) Does anyone know when the Haswell version of the device is coming out? Not that I really have a choice because I already have this one. I just want to know that I didn't buy it weeks before a release so I don't feel cheated.
    Otherwise, it's a great unit and concept in general. I was considering the x230t because it has everything and more (including battery life). But unfortunately it's a little overkill, much more expensive, and I couldn't find many configurations on the outlet (probably because nobody wants to return theirs).
    I would appreciate any and all helpful recommendations/explanations (particularly honest ones) and I thank you for reading this.

    I think our estimate of 3-4 hours of battery life is closer to real world if you are actively using the computer.
    From the many forum posts, I have come to believe that the only way to exceed 5 hours is to replace the hard drive with an SSD. I have gotten such good performance after reinstalling from scratch and getting the mSATA working that it is hard for me to justify the $200 just for the extra battery life and to get rid of the annoying pauses when the hard-drive shock sensor is triggered.
    For me, this was a "you get what you paid for" situation. I would have liked 8 GB of RAM and the i7 processor, but the i5 is acceptable. The only way I think you will be satisfied is if you go with an SSD instead of the HDD. Good luck.

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200,000,000 documents per day with a maximum of about 5000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure. This would result in the need of at least 5 CUs just to handle the inserts.
    Since one CU consists of 2000 RUs I would expect the RU usage to be about 4 RUs per single document insert or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to "Session".
    I am using the SP from the example C# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumption (ok… obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I would also need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally across the available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection I would actually insert documents into).
    Is there any (better) way to handle this use case than to buy enough CUs so that the actual hot collection gets the needed throughput?
    Example:
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for hot collection (values from documentation): 20,000
    => 70 CUs (20,000 / 286)
    vs. 10 CUs when using one collection and batch inserts, or 20 CUs when using one collection and single inserts.
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as is because of the current limit of 10 GB per collection. I am just trying to do a POC so I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can or should be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and have horrible query execution times there because of full table scans.
    Once again, my desired setup:
    200,000,000 inserts per day / maximum of 5000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact the perfect setup would be to have only one (huge) collection with automatic document retention… but I guess this won't be an option at all?
    I hope you understand my problem and can give me some advice on whether this is at all possible, or will be possible in the future, with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    First of all, thanks for your reply regarding my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents per second and CU as expected.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is several (actually 6-7) times higher than expected… even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were anywhere near possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in Preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options… which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2000 RUs, or is this a purely theoretical value? Or is it just because of the Preview, and the stated values are planned to work later?
    Regarding your feedback:
    "...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server specified retry interval…"
    Sadly this is not possible for me, because I have to query the data in near real time for my use case… so queuing is not an option.
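    (For completeness, the retry-after pattern described in that feedback looks roughly like the sketch below. It is deliberately SDK-agnostic Java: RequestRateTooLargeException, getRetryAfterMillis() and the insert callback are hypothetical stand-ins, not the actual DocumentDB client API.)

    import java.util.concurrent.Callable;

    // Hypothetical stand-in for the SDK's rate-limiting exception.
    class RequestRateTooLargeException extends RuntimeException {
        private final long retryAfterMillis;

        RequestRateTooLargeException(long retryAfterMillis) {
            this.retryAfterMillis = retryAfterMillis;
        }

        long getRetryAfterMillis() {
            return retryAfterMillis;
        }
    }

    final class ThrottledInserter {
        // Runs the given insert, waiting for the server-specified interval and
        // retrying whenever the request rate is too large.
        static <T> T insertWithRetry(Callable<T> insert, int maxRetries) throws Exception {
            for (int attempt = 0; ; attempt++) {
                try {
                    return insert.call();
                } catch (RequestRateTooLargeException e) {
                    if (attempt >= maxRetries) {
                        throw e;                           // give up, surface the throttle
                    }
                    Thread.sleep(e.getRetryAfterMillis()); // honor the retry-after hint
                }
            }
        }
    }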
    "We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it."
    I guess I could circumvent this by not clustering into "hot" and "cold" collections but into "hot" and "cold" databases with one or multiple collections each (if 10 GB remains the limit per collection), if there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure that I am on the right track.
    So if you were able to answer just one question, it would be:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2000 RUs in reality? ;-)
    Best regards and thanks again

  • I have some questions regarding setting up a software RAID 0 on a Mac Pro

    I have some questions regarding setting up a software RAID 0 on a Mac pro (early 2009).
    These questions might seem stupid to many of you, but, as my last, in fact my one and only, computer before the Mac Pro was a IICX/4/80 running System 7.5, I am a complete novice regarding this particular matter.
    A few days ago I installed a WD3000HLFS VelociRaptor 300GB in bay 1, and moved the original 640GB HD to bay 2. I now have 2 bootable internal drives, and currently I am using the VR300 as my startup disk. Instead of cloning from the original drive, I have reinstalled the Mac OS, and all my applications & software onto the VR300. Everything is backed up onto a WD SE II 2TB external drive, using Time Machine. The original 640GB has an eDrive partition, which was created some time ago using TechTool Pro 5.
    The system will be used primarily for photo editing, digital imaging, and to produce colour prints up to A2 size. Some of the image files, from scanned imports of film negatives & transparencies, will be 40MB or larger. Next year I hope to buy a high resolution full frame digital SLR, which will also generate large files.
    Currently I am using Apple's bundled iPhoto, Aperture 2, Photoshop Elements 8, Silverfast Ai, ColorMunki Photo, EZcolor and other applications/software. I will also be using Photoshop CS5, when it becomes available, and I will probably change over to Lightroom 3, which is currently in Beta, because I have had problems with Aperture, which, until recent upgrades (HD, RAM & graphics card) to my system, would not even load images for print. All I had was a blank preview page, and a constant, frozen "loading" message - the symbol underneath remained static, instead of revolving!
    It is now possible to print images from within Aperture 2, but I am not happy with the colour fidelity, whereas it is possible to produce excellent, natural colour prints using its "minnow" sibling, iPhoto!
    My intention is to buy another 3 VR300s to form a 4-drive RAID 0 array for optimum performance, and to store the original 640GB drive as an emergency bootable back-up. I would have ordered the additional VR300s already, but for the fact that there appears to have been a run on them, and currently they are out of stock at all but the more expensive UK resellers.
    I should be most grateful to receive advice regarding the following questions:
    QUESTION 1:
    I have had a look at the RAID setting up facility in Disk Utility and it states: "To create a RAID set, drag disks or partitions into the list below".
    If I install another 3 VR300s, can I drag all 4 of them into the "list below" box, without any risk of losing everything I have already installed on the existing VR300?
    Or would I have to reinstall the OS, applications and software again?
    I mention this, because one of the applications, Personal accountz, has a label on its CD wallet stating that the Licence Key can only be used once, and I have already used it when I installed it on the existing VR300.
    QUESTION 2:
    I understand that the failure of just one drive will result in all the data in a Raid 0 array being lost.
    Does this mean that I would not be able to boot up from the 4 drive array in that scenario?
    Even so, it would be worth the risk to gain the optimum performance provided by RAID 0 over the other RAID setup options, and, in addition to the SE II, I will probably back up all my image files onto a portable drive as an additional precaution.
    QUESTION 3:
    Is it possible to create an eDrive partition, using TechTool Pro 5, on the VR300 in bay 1?
    Or would this not be of any use anyway, in the event of a single drive failure?
    QUESTION 4:
    Would there be a significant increase in performance using a 4 x VR300 drive RAID 0 array, compared to only 2 or 3 drives?
    QUESTION 5:
    If I used a 3 x VR300 RAID 0 array, and installed either a cloned VR300 or the original 640GB HD in bay 4, and I left the Startup Disk in System Preferences unlocked, would the system boot up automatically from the 4th drive in the event of a single drive failure in the 3-drive RAID 0 array that had been selected for startup?
    Apologies if these seem stupid questions, but I am trying to determine the best option without foregoing optimum performance.

    Well said.
    Steps to set up RAID
    Setting up a RAID array in Mac OS X is part of the installation process. This procedure assumes that you have already installed Mac OS 10.1 and the hard drive subsystem (two hard drives and a PCI controller card, for example) that RAID will be implemented on. Follow these steps:
    1. Open Disk Utility (/Applications/Utilities).
    2. When the disks appear in the pane on the left, select the disks you wish to be in the array and drag them to the disk panel.
    3. Choose Stripe or Mirror from the RAID Scheme pop-up menu.
    4. Name the RAID set.
    5. Choose a volume format. The size of the array will be automatically determined based on what you selected.
    6. Click Create.
    Recovering from a hard drive failure on a mirrored array
    1. Open Disk Utility in (/Applications/Utilities).
    2. Click the RAID tab. If an issue has occurred, a dialog box will appear that describes it.
    3. If an issue with the disk is indicated, click Rebuild.
    4. If Rebuild does not work, shut down the computer and replace the damaged hard disk.
    5. Repeat steps 1 and 2.
    6. Drag the icon of the new disk on top of that of the removed disk.
    7. Click Rebuild.
    http://support.apple.com/kb/HT2559
    Drive A + B = VOLUME ONE
    Drive C + D = VOLUME TWO
    What you put on those volumes is of course up to you and easy to do.
    A system really only needs to be backed up "as needed" like before you add or update or install anything.
    /Users can be backed up hourly, daily, weekly schedule
    Media files as needed.
    Things that hurt performance:
    Page outs
    Spotlight - disable this for boot drive and 'scratch'
    SCRATCH: Temporary space; erased between projects and steps.
    http://en.wikipedia.org/wiki/Standard_RAID_levels
    (normally I'd link to Wikipedia but I can't load right now)
    Disk drives are the slowest component, so tackling that has always made sense. Easy way to make a difference. More RAM only if it will be of value and used. Same with more/faster processors, or graphic card.
    To help understand and configure your 2009 Nehalem Mac Pro:
    http://arstechnica.com/apple/reviews/2009/04/266ghz-8-core-mac-pro-review.ars/1
    http://macperformanceguide.com/
    http://www.macgurus.com/guides/storageaccelguide.php
    http://www.macintouch.com/readerreports/harddrives/index.html
    http://macperformanceguide.com/OptimizingPhotoshop-Configuration.html
    http://kb2.adobe.com/cps/404/kb404440.html

  • Questions regarding creation of vendor in different purchase organisation

    Hi ABAP gurus,
    I have a few questions regarding data transfers.
    1) While creating a vendor, the vendor is specific to a company code, and the vendor can be present in different purchasing organisations within the same company code if the purchasing organisation is present at plant level. My client has the vendor in different purchasing organisations; how do I handle the above situation?
    2) I had a few error records while uploading MM01; how do I download the error records? I was using LSMW with predefined programs.
    3) For a few applications there are no predefined programs, so I will have to choose either a predefined BAPI or IDocs. Which is better to go with? I found that BAPIs and IDocs have the same predefined structures, so what is the difference between the two?

    Hi,
    1. Create a BDC program with the purchasing organisation as a parameter on the selection screen, then run the same BDC program for the different purchasing organisations so that the vendors are created in each of them.
    2. Check the Action Log in LSMW and see.
    3. See the doc:
    BAPI - BAPIs (Business Application Programming Interfaces) are the standard SAP interfaces. They play an important role in the technical integration and in the exchange of business data between SAP components, and between SAP and non-SAP components. BAPIs enable you to integrate these components and are therefore an important part of developing integration scenarios where multiple components are connected to each other, either on a local network or on the Internet.
    BAPIs allow integration at the business level, not the technical level. This provides for greater stability of the linkage and independence from the underlying communication technology.
    LSMW - No ABAP effort is required for the SAP data migration. However, effort is required to map the data into the structure according to the pre-determined format as specified by the pre-written ABAP upload program of the LSMW.
    The Legacy System Migration Workbench (LSMW) is a tool recommended by SAP that you can use to transfer data once only or periodically from legacy systems into an R/3 System.
    More and more medium-sized firms are implementing SAP solutions, and many of them have their legacy data in desktop programs. In this case, the data is exported in a format that can be read by PC spreadsheet systems. As a result, the data transfer is mere child's play: Simply enter the field names in the first line of the table, and the LSM Workbench's import routine automatically generates the input file for your conversion program.
    The LSM Workbench lets you check the data for migration against the current settings of your customizing. The check is performed after the data migration, but before the update in your database.
    So although it was designed for uploading of legacy data it is not restricted to this use.
    We use it for mass changes, i.e. uploading new/replacement data and it is great, but there are limits on its functionality, depending on the complexity of the transaction you are trying to replicate.
    The SAP transaction code is 'LSMW' for SAP version 4.6x.
    Check your procedure using these links:
    BAPI with LSMW
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI
    For a document on using BAPI with LSMW, I suggest you visit:
    http://www.****************/Tutorials/LSMW/BAPIinLSMW/BL1.htm
    http://esnips.com/doc/1cd73c19-4263-42a4-9d6f-ac5487b0ebcb/LSMW-with-Idocs.ppt
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI.ppt
    Regards
    Anji

  • Hi, I want to downgrade from OSX LEOPARD to OSX TIGER but I have a few questions regarding this. My iMac is originally from 2007 it came preloaded with tiger. I have original install tiger discs version 10.4.10. Is it safe to downgrade or not please help

    Hi, I want to downgrade from OS X Leopard to OS X Tiger but I have a few questions regarding this. My iMac is originally from Sep 2007; it came preloaded with Tiger, and I have the original (2) Tiger install discs, version 10.4.10. I want to know if it is safe and what the necessary steps are to do so. Also, by downgrading, I'm wondering if a lot of apps nowadays still support Tiger; for example I have Photoshop versions 5 and 4, and these are very important to me. One last question: does anyone know of any reliable virus protection for Mac that doesn't slow down your computer? I have read that a lot of them do. If anyone can help me I would greatly appreciate it! Here are the specs for my iMac:
    Model Name: iMac
    Model Identifier: iMac7,1
    Processor Name: Intel Core 2 Duo
    Processor Speed: 2 GHz
    Number Of Processors: 1
    Total Number Of Cores: 2
    L2 Cache: 4 MB
    Memory: 2 GB
    Bus Speed: 800 MHz

    Most of the time a perception of general slow performance is the result of installing third party junk alleged to speed up, "clean" or "optimize" your Mac, or to look for viruses that don't exist. Ideally you would know what you installed so you can uninstall it, but if you don't know or aren't sure there are techniques such as Safe Mode and creating a temporary user account to confirm that suspicion.
    If you open Activity Monitor it may show a process, or processes, that occupy a lot of your system's time.
    Slowness confined solely to web browser activity is often the result of an inexorable progress toward websites that demand ever more processor-intensive tasks. If your slow performance is strictly limited to web browsing, you might try disabling Flash by either uninstalling it, or use utilities such as ClickToFlash that allow you to control what Flash content gets loaded. Flash in itself is not inherently evil, but there is nothing to stop websites or the advertisers who pay for them from writing horrible Flash code that can do everything from hogging 100% of your CPU's time to causing random crashes. You can watch Activity Monitor as in the above to correlate these troublesome web pages with performance degradation.
    You are correct; if your computer shipped with Tiger you may certainly revert to it. I forgot that Tiger was shipping on new Macs as recently as five years ago. To downgrade it would be necessary to completely erase your hard disk and boot with the Tiger installation DVD, followed by installing it anew. Such drastic measures are not necessary and you are unlikely to be satisfied with the results anyway.
    Assuming your system is free of third party parasitic junk attached to OS X in an ill-conceived attempt to improve upon it, that your hard disk drive is sound and the boot volume has enough free space to work with, by far the best performance-enhancing improvement would be to add more memory. Buy as much as your computer can use and that you can afford. 2 GB is not that much any more.
    Read the following for some recommended troubleshooting techniques from Apple:
    General purpose Mac troubleshooting guide: Isolating issues in Mac OS X
    Creating a temporary user to isolate user-specific problems: Isolating an issue by using another user account
    Memory limitations: Using Activity Monitor to read System Memory and determine how much RAM is being used
    Identifying resource hogs and other tips: Runaway applications can shorten battery runtime
    Starting the computer in "safe mode": Mac OS X: What is Safe Boot, Safe Mode?

  • Regarding performance impact if I do DB accessing coding in comp Controller

    Hi ,
    This is my project requirement: I have to use a COM component which in turn fetches data from the database. I am using a Java-COM bridge tool to do this. This tool generates the Java proxy classes for the VB COM component.
    I am using the Java proxy classes (these class files use JNI to connect to the VB COM component and fetch the data from the DB) in my Web Dynpro component controller.
    The architecture is as below:
    WEBDYNPRO >> Java class objects (generated by the Java-COM bridge tool) >> Java-COM bridge tool >> VB COM+ component >> SQL Server.
    The issue:
    Performance: the first time it is OK, but on consecutive calls the application degrades very visibly, and after 4 iterations it hangs. When I look at the log I am getting this:
    Message : Exception occured during processing of Web Dynpro application com/oreqsrch/com.oreqsrchapp.OReqSrchApp.
    The causing exception is nested.
    [EXCEPTION]
    com.sap.tc.webdynpro.services.session.LockException: Thread SAPEngine_Application_Thread[impl:3]_36 failed to acquire exclusive lock on client session ClientSession(id=(J2EE9536400)ID1120562150DB11245826542790956137End_1159630423). Existing locks: LockingManager(ThreadName:SAPEngine_Application_Thread[impl:3]_36, exclusive client session lock:
    ClientSessionLock(SAPEngine_Application_Thread[impl:3]_9), shared client session locks: ClientSessionSharedLockManager([]), app session locks: ApplicationSessionLockManager([]), current request: com/oreqsrch/com.oreqsrchapp.OReqSrchApp).
    Hint: Take a thread dump of the server node to find the blocking thread that causes the problem.
    Is this issue because I have written the data access code in the component controller rather than writing it in some beans?
    My question regarding this:
    What would the performance impact be if I write the DB access code in the Web Dynpro component controller rather than in a bean or an EJB? (I know that ideally DB access code has to be written in a bean or EJB.)
    Please address this from a performance point of view.
    thanks
    pkiran

    Hi Both,
    Thanks for the reply.
    Yes, they are closed and set to null.
    Connection max and min properties are controlled at the COM+ components in VB.
    Since I am using the COM-Java bridge, I am just invoking the methods defined in the VB code through the bridge tool. All the objects that retrieve the data are closed and nullified.
    My question is:
    If I write DB access code in the component controller instead of in an EJB or Java bean, will there be any performance issue?
    regards
    pkiran
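    (As an illustration of the separation being asked about, and not of the COM-bridge setup itself: the main reason to keep data access out of the component controller is that a dedicated class can own the acquire/use/release cycle of its resources, so the controller never holds connections or proxies across requests. A minimal sketch using plain JDBC as a stand-in, with hypothetical table and class names:)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical data-access helper; the controller only calls findNames()
    // and never sees the connection, so nothing can leak across requests.
    public class RequestSearchDao {
        private final String url;
        private final String user;
        private final String password;

        public RequestSearchDao(String url, String user, String password) {
            this.url = url;
            this.user = user;
            this.password = password;
        }

        public List<String> findNames(String pattern) throws SQLException {
            List<String> names = new ArrayList<>();
            // try-with-resources closes statement and connection even on failure
            try (Connection con = DriverManager.getConnection(url, user, password);
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT name FROM requests WHERE name LIKE ?")) {
                ps.setString(1, pattern);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        names.add(rs.getString(1));
                    }
                }
            }
            return names;
        }
    }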

  • Question regarding ASO application in Essbase 11 version

    Hi All,
    Thanks for the replies to my previous posts.
    I have a question regarding an ASO application for a telecom company built in Essbase version 11. Please provide your feedback on the design.
    The ASO application has the following dimensions:
    Dimensions      Number of Levels    Number of Level 0 Members    Number of Attribute Dimensions
    Dimension1      2                   6.5 million                  15
    Dimension2      1                   3
    Dimension3      1                   4
    Dimension4      1                   6
    Dimension5      1                   6
    Dimension6      1                   5
    Dimension7      1                   3
    Dimension8      5                   1700
    Dimension9      2                   800
    Dimension10     2                   40000
    Dimension11     3                   750
    Dimension12     2                   34000
    Dimension13     1                   15
    The number of Measures is 8.
    The outline size is around 2.12 GB.
    The data is mostly sparse. Does this design yield good performance? Should I change some of the attributes to UDAs to increase performance? I think attribute dimensions are more flexible than UDAs, but they affect retrieval performance.
    Thanks in advance.
    Kannan.

    In ASO, attribute dimensions are treated like regular dimensions; that is to say, they are materialized just like a regular dimension. Changing them to UDAs won't buy you the same performance as having them as dimensions. The one nice thing about attributes in ASO cubes (and BSO cubes) is that you don't clutter up the screen with dimensions that are not used a lot. If your attribute dimensions are used often in your ASO cube, there would be no performance difference if you made them regular dimensions.

  • Question regarding Calculated Key Figures in BEx and their impact on SQL

    Hello,
    I am new to the BO-SAP integration. I have a question regarding using CKFs in BEx.
    I created a universe off of a BEx query with no CKF. I then created a Webi report with some dimensions and measures. I captured the SQL generated using a trace (ST05).
    In the same BEx query, I then created a CKF, refreshed the universe, and created a new Webi report using the same dimensions and the CKF. The SQL generated had many more select statements.
    My question is: what is the effect of a CKF on the generated SQL, and are there performance issues using CKFs in BEx as opposed to creating variables in the Webi report?
    Thanks,
    Nikhil

    Hi,
    if your CKF will always have the same unit and you have one KF in your InfoProvider with this unit, you can try this trick:
    create a new hidden CKF as: new CKF = KF / KF (this makes new CKF = 1 unit)
    change your old CKF to: old CKF = old CKF * new CKF
    Let me know if it works.

  • Questions regarding Optimizing formulas in IP

    Dear all,
    This weekend I had a look at the webinar on Tips and Tricks for Implementing and Optimizing Formulas in IP.
    I'm currently working on an IP implementation and encountered the following questions when getting more in-depth.
    I'd very much appreciate it if you could comment on the questions below.
    1.) I have a question regarding optimization 3 (slide 43) about Conditions:
    'If the condition is equal to the filter restriction, then the condition can be removed.'
    I agree fully with this, but have a question about using the Planning Function (PF) in combination with a query as DataProvider.
    In my query I have a filter in the Characteristic Restriction.
    It contains variables on fiscal year and version. These only allow single-value entry.
    The DataProvider acts as a filter for my PF, so I'd suppose I don't need a condition for my PF since it is already narrowed down to fiscal year and version by my query.
    a.) Question: Is that correct?
    I just want to make sure that I don't get too many records as input for my PF. How detrimental to performance is it to use conditions anyway?
    2.) I read in training BW370 (IP training) that a PF is executed for the currently set filter (navigational state) in the query, and that characteristics used in restricted key figures are ignored in the filter.
    So, if I use version in the restricted key figure it will be ignored.
    Questions:
    a.) Does this mean that the PF is executed for all versions in the system, or for the versions that are in the filter of the Characteristic Restrictions rather than the currently set filter?
    b.) I'd suppose the dataset for the PF can never be bigger than the initial dataset selected by the query, right?
    c.) Is the PF executed against the navigational state anyway when I use filtering? I have an example where I filter on the customer field, making my dataset smaller, but executing the PF still takes the same amount of time.
    d.) I also notice that the PF is executed twice. A popup comes up showing messages regarding the execution; after pressing OK, it seems the PF runs again...
    3.) If I use variables in my Planning Function I don't want to fill in the parameter VAR_VALUE with a value. I want to use the variable which is ready for input from the selection screen of the query.
    So when I run the PF it should use the BI variable. It's no problem to customize this in the Modeler, but when I go into the frontend the field VAR_VALUE stays empty and needs a value.
    Question:
    a.) What do I enter here? For parameter VAR_NAME I use the variable name, but what do I use for parameter VAR_VALUE? Also the variable name?
    4.) Question regarding optimization 6 (slide 48) about Formulas on MultiProviders:
    'If the formula is using data of only one InfoProvider but is defined on a MultiProvider, then the complete formula should be moved to the single base InfoProvider.'
    In our case we have three cubes in the MP, two realtime and one normal one. Right now we have one Aggregation Level (AL) on top of the MP.
    For one formula I only need one cube, so it's better to create another AL with the formula based on that cube.
    For another formula I need the two realtime cubes. This is interesting regarding the optimization statement.
    Question:
    a.) Can I use the AL on the MP then, or is it better to create a new MP with only these two cubes and create an AL on top of that, and then create the formula on the AL based on the MP with the two cubes?
    This makes the architecture more complex.
    Thanks a lot in advance for your appreciated answers!
    Kind regards, Harjan

    Marc,
    Some additional questions regarding locking.
    I encounter that the dataset that is locked depends on the restrictions made in the 'Characteristic Restrictions'-part of the query.
    Restrictions in the 'Default Values'-part are not taken into account. In that case all data records of the characteristic are locked.
    Q1: Is that correct?
    To give an example: assume you restrict customer to a hierarchy node in Default Values. If you want people to plan concurrently, this is not possible, since all customers are locked then. When the customer restriction is moved to the Characteristic Restrictions, the system only locks the specific customer hierarchy node and people can plan concurrently.
    Q2: What about variables used in restricted key figures, like a variable for fiscal year/period? Is only this fiscal year/period locked then?
    Q3: We'd like to lock on a navigational attribute. The nav attr is put as a variable in the filter of the Characteristic Restrictions. Does the system then only lock this selection for the nav.attr? Or do I have to change my locking settings in RSPLSE?
    Then a question regarding locking of data for functions:
    Assume you use the BEx Analyzer and use the query as the data_provider_filter for your planning function. You use restricted key figures with characteristic Version. The first column contains the amount for version 1 and the second column the amount for version 2.
    In the Characteristic Restrictions you've restricted version to the values '1' and '2'.
    When executing the input-ready query, versions 1 and 2 are locked (due to the selection in the Characteristic Restrictions).
    But when executing the planning function, all versions are locked (*)
    Q4: True?
    Kind regards, Harjan

  • Question Regarding Mesh with 3702 and non-AC APs

    Hello! 
    A quick question regarding mesh deployments with 2 different sorts of APs, AC and non-AC models: if my 3702i is my root AP and the 3602i is my MAP, will AC still work at 80 MHz, or will I have to switch to 40 MHz (and thus cripple (???) AC performance)?
    Not 100% sure on this... I *think* it should still work for the normal 802.11n connection, but I'm not sure whether the 80 MHz channel width (needed??) for AC will cause the non-AC 3602i to be stranded?
    Thanks a lot for your insight!

    Currently, my network DHCP server is a software-based DHCP server. In reading over your post, if I understood correctly, it sounds like the managed switch would have its own hardware-based DHCP server to assign IP addresses to those clients identified on the "external" VLAN. Did I understand that correctly, or did I misread something?
    The DHCP server will be software based; even though you define it on your switch, it is a DHCP service running on the switch's OS.
    I am configuring this setup for a small business application and will need to purchase a managed switch with 16 or 24 ports. Do you have any recommendations on a particular managed switch that will handle the VLAN configuration and include POE while keeping costs in mind.
    In this forum, most of us discuss Cisco enterprise-grade wireless. Here are the details of the 2960-X series switches, if you are interested:
    http://www.cisco.com/c/en/us/products/switches/catalyst-2960-x-series-switches/index.html
    You may need to check the pricing with your Cisco account manager or from a Cisco partner.
    HTH
    Rasika

  • A question regarding deallocating arrays

    Hello. Just to check if I learned arrays correctly...
    For example I have an array 'anArray[ ]':
    public String[ ] anArray;
    -> to deallocate anArray, I simply used:
    anArray = null; instead of performing a for loop, maybe...
    Is this correct?
    It's quite confusing. Does it apply to multidimensional arrays? For example anArray[ ][ ] = null???
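    (A short sketch of the idea in plain Java, with hypothetical names: dropping the last reference is enough, and for a multidimensional array the inner arrays are reclaimed too, as long as nothing else still refers to them.)

    public class ArrayRelease {
        public static void main(String[] args) {
            String[] anArray = new String[1_000];
            String[][] grid = new String[100][100];

            // No per-element loop is needed: once the references are dropped,
            // the arrays (and, for grid, its inner String[100] rows) become
            // unreachable and are eligible for garbage collection.
            anArray = null;
            grid = null;
        }
    }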

    A question regarding eclipse.
    No offence meant. People are not usually offended by Eclipse.
    I have added a jar file to the current project in its build path.
    This executes the concerned class file which is in the jar file.
    If I remove the jar file from the build path and run the project, it still executes that class file.
    I tried building a clean project and running it, but it still executes the jar file and the class.
    Any idea why this is happening? You have that class file elsewhere, or you have confused your problem statement.
    Why don't you create a new project and add everything except that class/jar (whichever you mean)?

  • Question regarding the installation of a J2EE 6.40 Add-in

    Hi all,
    I would like to install a J2EE engine on a test instance of ECC 5.0 and have a few questions regarding the installation...
    Do I have to use the MASTER CD to first install the J2EE engine (Support Package 0) and then apply the latest support packages found on the SAP Marketplace?
    Or should I be able to directly install the J2EE Add-In by using the latest support packages found on the SAP Marketplace?
    Best regards,
    Xavier Vermaut

    Thanks Bhavik for your reply,
    That's what I actually thought but I get the following problem... Here's what I wrote into my customer message... I am still waiting for an answer and would like to get this solved ASAP
    Dear SAP,
    We would like to install the J2EE 6.40 Add-In on our ECC 5.0 instance
    (TST) but get the following error message at the very beginning of the
    installation
    > Cannot find an installed ABAP system, which is a prerequisite for a
    > J2EE Add-In installation. The installation cannot continue.
    We checked the installation logs (sapinst_dev.log) and found the
    following :
    > Found these instances:
    > sid: MGR, number: 00, name: DVEBMGS00, host: erpqs1a
    > sid: TST, number: 10, name: DVEBMGS10, host: erpqs1a
    Why does the installation say that it can not find any ABAP systems when
    having previously found the 2 different instances running on this
    server?
    Would this problem be related to the fact that we have two instances on
    this server?
    Please find hereunder the way we performed this installation :
    01) Download of the 4 different parts of SAP J2EE Engine 6.40 SP 10
         (Solaris 10 - Oracle)
         Part I   : SAPINST10_0-20000121.SAR         (Solaris 64)
         Part II  : CTRLORA10_0-20000121.SAR         (Solaris 64)
         Part III : J2EERTOS10_0-20000121.SAR        (Solaris 64)
     Part IV  : J2EERT10_0-10001982.SAR          (OS Independent)
    02) Extract these 4 archives into /install/J2EE_640
    03) Check Java Version and Environment Variables
    04) Check Solaris Pre-Requisites
    05) Adapt "product.xml" as specified in OSS Note 697535 (IGS)
    06) Log in as 'root'
    07) Set DISPLAY environment Variable
    08) Move to the Installation directory
          ( /install/J2EE_640/SAPINST-CD/SAPINST/UNIX/SUNOS_64 )
    09) ./sapinst
    10) In the 'Welcome to Netweaver Installation' screen, select
          => Dialog Instance Finalization
    Any idea how to get this solved?
    Best regards,
    Xavier Vermaut

  • Beginner question regarding 32-bit mixing and mixdown workflow

    Hello
    I have a beginner question regarding 32-bit mixing and mixdown.
    If I edit some 16Bit, 44.1kHz Stereo WAV Files and put them into multi-track view to do crossfades, how should I do the mixdown?
    Audition shows me in multi-track view, that it is doing 32-Bit mixing.
    Can I just mixdown to 16Bit, 44.1kHz Stereo without any dithering (as the files are 16Bit, 44.1kHz Stereo to begin with), or will I lose quality that way?
    I will be performing normalization to 96% on the mixdown and then split to tracks in Audition, as in the end I want to have an audio CD.
    I guess I could mixdown to 32Bit, then normalize and in the end save back to 16Bit, 44.1kHz Stereo WAV (with dithering, I suppose?), but I want to avoid any unnecessary converting steps.
    Greetings

    Any time you do any processing on a 16-bit file in 16 bit only, it will degrade the audio slightly due to rounding of the calculations. Working in 32-bit floating point (Audition's default) takes account of all the bits generated by the processing.
    So it is always best to work in 32 bit, even if your originals were 16-bit, all the way through until the last stage of saving the files for CD burning. Any losses due to that final conversion will be insignificant compared to those from working in 16 bit.

  • Questions regarding hard disk replacement in MacBook Pro Early 2008 Edition

    Hello -
    You folks probably get this a lot but I have a MacBook Pro Early 2008 edition and would like to upgrade the base 200 GB 5400 RPM drive to a Seagate Momentus 7200.4 drive. I have a few questions regarding this, and would appreciate any feedback.
    1) Does anybody know if installing a new hard drive in a MacBook Pro would cause the warranty to be invalidated? I don't believe a hard drive is considered a user replaceable part, at least from what I read, and I would like to know if replacing it myself would void AppleCare.
    2) Does anybody have any experience with the Seagate Momentus 7200.4? I'm interested in how reliable the drive is and how much heat it generates. I've quickly discovered that I can use my MacBook Pro as an oven in tight spots and would like to prevent it from cooking its own internals.
    3) (Random) Does anybody know at what temperature the system actually overheats? I'm seeing 171 F and this is with smcFanControl turned up to the higher-RPM mode. I'm a little nervous when I'm running WoW and the system is averaging above 170 with smcFanControl totally cranked up to 6000 rpm.
    Thanks everyone!

    Thanks for that. I've had mixed results with both Seagate and WD, but historically fewer issues with WD. My experience with IBM/Hitachi makes it nearly impossible to recommend their products, so based on what I'm seeing, the Western Digital Scorpio Black looks like the best bet. Do you happen to know if Apple will do hardware installations outside of a warranty repair scenario? Ripping open my only notebook to perform "involved" hardware maintenance is not something I'd prefer to do, even with my technical background.
