Big Content Access Performance Challenge

Partitioning solution: GPT
+ Windows 8.1 can handle 64-bit GPT partitions - even those created with the parted program under a Linux OS
+ big disks become available at their full capacity
Formatting solution (for the time being):
1) Windows 8.1: NTFS
   + also works under Linux once ntfs-3g is installed (see the mount example after this list)
   - missing ext4 features such as journaling
2) Linux: EXT4 with mkfs.ext4
   + ext4 features such as journaling
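As a hedged illustration of the ntfs-3g option above (device and mount point names are assumptions, not taken from this setup):
   # mount an NTFS volume read/write under Linux via the ntfs-3g driver
   mount -t ntfs-3g /dev/sdb1 /mnt/bigdisk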
Challenge: big data performance is zero with Windows 8.1 Enterprise Evaluation at the moment
Problem 1: big partitions (>2TB): not available!
Problem 2: big files (>2TB): no access!
Problem 3: big hard disks (>2TB): no access!
Notes:
a) a 64-bit Linux OS was able to (i) create, (ii) access and (iii) operate with acceptable performance on big (>2TB) content/data/files on >4TB disks
b) the "** not accessible **" message should be reconsidered, because the content IS accessible once suitable 64-bit software is installed
c) the "** the volume does not contain a recognized file system **" message should rather be something like
           "sorry, the system cannot recognize this file system, please install/get .... solution"
d) a 32-bit system should NOT offer formatting as an option for 64-bit big data!
e) NTFS formatting should not be offered for big (>2TB) data by 32-bit software!
f) it is hard for the user to know whether problems are due to 32-bit programs when the 32-bit programs themselves don't
   recognize that they are processing big data - this challenge applies equally to other 32/64-bit hybrid OS setups
Question: what is the add-on software, and where can it be downloaded, to get access to EXT4 disks used by a 64-bit OS?

Thanks for your advice!
More detailed info:
- file size example: 3.3TB <=> is 64-bit
- partition size example: 4TB <=> is 64-bit
- hard disk example: 4TB HD used for enterprise big data apps <=> is 64-bit
- partitioned with 64-bit parted in enterprise Linux (see the command sketch after this list)
- formatted with 64-bit mkfs.ext4 in enterprise Linux
  where
  - ext4 (the fourth extended filesystem) is a journaling file system for Linux
  - it can support volumes with sizes up to 1 exbibyte (EiB) and files with sizes up to 16 tebibytes (TiB)
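For reference, a minimal sketch of the Linux-side preparation listed above (the device name /dev/sdb is an assumption; the exact commands used are not shown above):
  # create a GPT label and a single large ext4 partition on the 4TB disk, then format it
  parted /dev/sdb mklabel gpt
  parted /dev/sdb mkpart primary ext4 0% 100%
  mkfs.ext4 /dev/sdb1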
Goal: to explore Windows 8.1 Enterprise Evaluation performance with big data, using big data content created under 64-bit Linux
Questions:
1) do I need to install something to make Windows 8.1 Enterprise Evaluation handle big data properly?
2) what software does Microsoft recommend for Ext4?
3) will the final Windows 8.1 Enterprise release include Ext4 support?
4) which tools and apps does Microsoft recommend
   a) to read ext4-format big data and write it to a different hard disk created by Win 8.1 with GPT?
      - the goal is to use 64-bit Linux big data in Windows 8.1 Enterprise Evaluation
   b) to read from a hard disk created by Win 8.1 with GPT and to write to an ext4-format disk?
      - the goal is to use Windows 8.1 Enterprise Evaluation big data in 64-bit Linux
5) why does Microsoft diskpart give NTFS as the only formatting option for big volumes?
   > diskpart                  <-- enter command mode
   DISKPART> LIST VOLUME
   DISKPART> SELECT VOLUME 17  <-- one of the volumes
   DISKPART> FILESYSTEMS       --> Current File System
   Type: RAW                   <-- on the Linux system this is EXT4
   File Systems supported for formatting: NTFS  <-- *** only NTFS, where max file size < 2TB ***

Similar Messages

  • Getting a StackOverflowError trying to download big content sync zip file

    We've noticed this happens when trying to download a big Content Sync zip file (typically of around 280MB, although recently we saw the same issue with a 250MB file, so we're not sure of the root cause of this).
    The only prerequisite to reproduce this issue is to configure your content sync to have > 250MB of content (in the form of images, html, js, etc).  The steps to reproduce are the following:
    Navigate to the content sync console URL: /libs/cq/contentsync/content/console.html
    Click on the 'Clear Cache' button.
    Click on the 'Update Cache' button.  No problems up to here, content sync cache (under /var/contentsync) is populated with expected assets and files.
    Click on the 'Download Full' button.  Almost immediately, the app crashes with the following stack trace:
    02.05.2013 00:56:14.248 *ERROR* [204.90.11.3 [1367455869892] GET /etc/contentsync/audiusa-retail.zip HTTP/1.1] org.apache.sling.engine.impl.SlingRequestProcessorImpl service: Uncaught Throwable java.lang.StackOverflowError
            at org.apache.commons.collections.map.AbstractReferenceMap.isEqualKey(AbstractReferenceMap.java:434)
            at org.apache.commons.collections.map.AbstractHashedMap.getEntry(AbstractHashedMap.java:436)
            at org.apache.commons.collections.map.AbstractReferenceMap.getEntry(AbstractReferenceMap.java:405)
            at org.apache.commons.collections.map.AbstractReferenceMap.get(AbstractReferenceMap.java:230)
            at org.apache.jackrabbit.core.state.ItemStateReferenceCache.retrieve(ItemStateReferenceCache.java:147)
            at org.apache.jackrabbit.core.state.LocalItemStateManager.getItemState(LocalItemStateManager.java:171)
            at org.apache.jackrabbit.core.state.XAItemStateManager.getItemState(XAItemStateManager.java:260)
            at org.apache.jackrabbit.core.state.SessionItemStateManager.getItemState(SessionItemStateManager.java:161)
            at org.apache.jackrabbit.core.ItemManager.getItemData(ItemManager.java:382)
            at org.apache.jackrabbit.core.ItemManager.getItem(ItemManager.java:328)
            at org.apache.jackrabbit.core.ItemManager.getItem(ItemManager.java:622)
            at org.apache.jackrabbit.core.LazyItemIterator.prefetchNext(LazyItemIterator.java:122)
            at org.apache.jackrabbit.core.LazyItemIterator.<init>(LazyItemIterator.java:104)
            at org.apache.jackrabbit.core.LazyItemIterator.<init>(LazyItemIterator.java:85)
            at org.apache.jackrabbit.core.ItemManager.getChildProperties(ItemManager.java:816)
            at org.apache.jackrabbit.core.NodeImpl$10.perform(NodeImpl.java:2178)
            at org.apache.jackrabbit.core.NodeImpl$10.perform(NodeImpl.java:2174)
            at org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:216)
            at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:91)
            at org.apache.jackrabbit.core.NodeImpl.getProperties(NodeImpl.java:2174)
            at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:202)
            at org.apache.jackrabbit.core.NodeImpl.accept(NodeImpl.java:1697)
            at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:219)
            at org.apache.jackrabbit.core.NodeImpl.accept(NodeImpl.java:1697)
            at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:219)
            at org.apache.jackrabbit.core.NodeImpl.accept(NodeImpl.java:1697)
            at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:219)
            at org.apache.jackrabbit.core.NodeImpl.accept(NodeImpl.java:1697)
            at javax.jcr.util.TraversingItemVisitor.visit(TraversingItemVisitor.java:219)
    We've extended the content sync framework by writing 2 ContentUpdateHandler implementations. Given that the content update process finishes correctly, we don't think this is the root cause of the problem.
    Any help on this subject is appreciated.
    Thanks,
    -Daniel
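
    A hedged aside on the trace above: the StackOverflowError comes from a deep recursive traversal (javax.jcr.util.TraversingItemVisitor recursing through NodeImpl.accept), so one thing worth checking is whether a larger JVM thread stack postpones the overflow. The -Xss flag is standard; the heap size and jar name below are illustrative only, not taken from this setup:
    # start the CQ quickstart with a larger per-thread stack (values/names are illustrative)
    java -Xmx2048m -Xss8m -jar cq5-author-p4502.jar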

    Hi Daniel,
        Please file a Daycare ticket with steps to reproduce, logs, a thread dump & the hs_err file.
    Thanks,
    Sham
    @adobe_sham

  • Tools to measure Content DB performance?

    Dear All,
    Is there a tool to measure content DB performance (not size)? And especially, if a content DB hosts a lot of sites, is it possible to monitor a specific site's performance on that content DB?
    Kind Regards, John Naguib Technical Consultant/Architect MCITP, MCPD, MCTS, MCT, TOGAF 9 Foundation. Please remember to mark the reply as answer if it helps

    Hi John,
    Let's say there is, what would you like to see as the output for the DB performance?
    The way I see it users are accessing SharePoint sites, and you want to know if these are loading fast enough for them. In this case, you can use developer dashboard to see why pages might load slower than others.
    If your overall performance is really bad, try taking a look at the SharePoint components that are responsible for showing the page. Some things to check:
    - Server resources (CPU/RAM/disk space/disk IO) on all SharePoint and Database servers
    - Caching (BLOB, object etc.)
    - Distributed Cache
    Please let me know if you have any additional questions.
    Nico Martens
    SharePoint/Office365/Azure Consultant

  • How to access performance counter directly?

    What I am trying to do is program with the hardware performance counters of UltraSPARC T2+ processors in Solaris 10. I wrote a loadable syscall to invoke hv_niagara_getperf or ultra_getpic, trying to read the counters' data. But it just panics, unresponsive and without any text notice, and then resets. I don't know the real cause because I am not very familiar with SUN servers.
    I did some experiments both in the primary domain and in a logical domain. In the primary domain, both hv_niagara_getperf and ultra_getpic lead to a panic; in the logical domain, hv_niagara_getperf returns ENOACCESS and ultra_getpic leads to a panic.
    My questions are:
    1. Is there some initialization work or privilege setting needed to prevent the panic?
    2. Can a logical domain be granted the privilege to access the hardware performance counters? (I read the Hypervisor API manuals and they say only one domain can access the performance counters. But vcpus are divided and allocated to different domains, and every vcpu has its own performance counters, so how can I access them from within the different domains?)
    3. I know there are methods such as libcpc, the cpc module and the pcbe module for users or developers to get performance data. What I am working on is performance tuning of some threads on UltraSPARC T2+; is the overhead of using these OS-supported libraries or modules acceptable?
    Please help me, point out my mistake and give me some advice or instructions. Thank you very much.

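    For the OS-supported route mentioned in point 3, the counters can also be read without a custom syscall, e.g. with the cpustat(1M)/cputrack(1) utilities that sit on top of libcpc. A minimal sketch; the picN=event syntax and the Instr_cnt event name are platform-specific assumptions, so check the list printed by cpustat -h first:
    # list the counter events available on this CPU
    cpustat -h
    # sample the counters system-wide, once per second, 5 samples (event spec is illustrative; some platforms require both pic0 and pic1)
    cpustat -c pic0=Instr_cnt 1 5
    # follow the counters of one already-running process (pid is a placeholder)
    cputrack -c pic0=Instr_cnt -p <pid>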

  • Access is denied. Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository. If the repository being crawled is a SharePoint repository, verify that the account you are using has "Ful

    I am trying to resolve this after setting up my new farm. I have 2 WFEs, 1 app server, 1 server dedicated to crawling and 1 for search and index in my farm. I guess the dedicated crawl server is the root cause of the issue; I also applied the DisableLoopbackCheck setting (sketched below) but am still facing the same issue. Any solution?
    Please Mark it as answer if this reply helps you in resolving the issue,It will help other users facing similar problem
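
    For reference, the DisableLoopbackCheck workaround mentioned above is usually applied with a registry value like the following (a sketch of the common setting, followed by a reboot; verify it suits your security policy before using it):
    reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f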

    Hi Aditya,
    Please refer to the links below and try if they help:
    Add the full read rights to Default Content Access Account of Search Administration via the web application’s user policy.
    http://sharepoint.stackexchange.com/questions/88696/access-is-denied-verify-that-either-the-default-content-access-account-has-acce
    Grant the Default Content Access Account permission in User Profile Service Application
    http://www.sysadminsblog.com/microsoft/sharepoint-search-service-access-is-denied/
    Modify your crawl rule
    http://wingleungchan.blogspot.com/2011/11/access-is-denied-when-crawling-despite.html
    Add the crawl server's IP to the local hosts file
    http://wellytonian.com/2012/04/sharepoint-search-crawl-errors-and-fixing-them/
    Regards,
    Rebecca Tu
    TechNet Community Support

  • Retrieve the default content access account for search through code

    Hi there,
           Does anyone have the code to retrieve the default content access account (crawl account) for the MOSS search? I tried looking into the Microsoft.SharePoint.Search.Administration.SearchService namespace. It has a "crawlaccount" property but I'm not sure how to initialize it.
    Thanks,
    Kish

    try:
    using Microsoft.Office.Server.Search.Administration;
    using Microsoft.SharePoint;

    // returns the default content access (crawl) account for the site's search context
    using (SPSite site = new SPSite("http://basesmcdev2/sites/tester1"))
    {
        SearchContext context = SearchContext.GetContext(site);
        Content content = new Content(context);
        return content.DefaultGatheringAccount;
    }
    http://www.certdev.com

  • Changing Content.Access.Path into short URL implies errors at CAT2

    Hello,
    After changing Content.Access.Path to another value, as per note 549610, I see complications and errors in my application CAT2: some buttons and information are no longer shown.
    When I revert the change to Content.Access.Path, the errors in CAT2 are gone.
    It seems that solving one problem (by changing parameters) has side effects on another application.
    Who can help me, please?
    Thank you in advance!!
    Best regards
    Andreas

    Hi priya,
    Not sure: check the syntax in your Update Rules, also at the level of the start routine.
    Ciao.
    Riccardo.

  • Non Administrators can not access Performance Counters

    I have a problem with the ColdFusion performance counters in
    CF8 - 8,0,1,195765 enterprise version installed on Windows 2003 R2
    Standard Edition SP2. The counters appear for administrators, but
    they are NOT available for Non administrators that are members of
    the "Performance Monitor Users" group. All other perfmon counters
    are visible by these users but not the CF8 ones. I do NOT want to
    give all users Admin access to my servers, just so that they can
    monitor the server's performance! Does anyone else have this
    problem?

    Hi Charlie, yes this is the same post. I have seen this
    technote and have followed the recommendation there, but it has
    not fixed the issue. I'm using 32-bit CF and a 32-bit OS. The perfmon
    counters for CF don't seem to work the same way as the other
    perfmon counters do, i.e. that non-administrators only need to be a member of
    the local "Performance Monitor Users" or "Performance Log Users"
    group to access perfmon or the perfmon logs.
    Thanks for your reply though!

  • Why increase db env cache not improve random access performance?

    I want to improve BDB random access performance.
    I built a BDB with 3,000,000 records and then tested random access on these records. I assigned the DB env cache 100M, 200M and 500M. With more cache, the cache hit ratio increased. However, the overall performance did NOT improve (actually performance is a little worse with the bigger cache). Why does this happen? Is there any way to improve the performance?
    I plan to change the page size later. Any suggestions on the page size setting? (A configuration sketch follows the statistics below.)

    Below is the db_stat output of 100M cache.
    Thu Feb 25 07:02:37 2010 Local time
    53162 Btree magic number
    9 Btree version number
    Little-endian Byte order
    multiple-databases Flags
    2 Minimum keys per-page
    4096 Underlying database page size
    1007 Overflow key/data size
    4 Number of levels in the tree
    3000000 Number of unique keys in the tree
    3000000 Number of data items in the tree
    546 Number of tree internal pages
    42286 Number of bytes free in tree internal pages (98% ff)
    48388 Number of tree leaf pages
    4939160 Number of bytes free in tree leaf pages (97% ff)
    0 Number of tree duplicate pages
    0 Number of bytes free in tree duplicate pages (0% ff)
    3000000 Number of tree overflow pages
    2259M Number of bytes free in tree overflow pages (81% ff)
    0 Number of empty pages
    0 Number of pages on the free list
    125MB 2KB 24B Total cache size
    1 Number of caches
    1 Maximum number of caches
    125MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    9924074 Requested pages found in the cache (54%)
    8125037 Requested pages not found in the cache
    0 Pages created in the cache
    8125037 Pages read into the cache
    0 Pages written from the cache to the backing file
    8094176 Clean pages forced from the cache
    0 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    30861 Current total page count
    30860 Current clean page count
    1 Current dirty page count
    16381 Number of hash buckets used for page location
    26M Total number of times hash chains searched for a page (26174148)
    132 The longest hash chain searched for a page
    53M Total number of hash chain entries checked for page (53243413)
    0 The number of hash bucket locks that required waiting (0%)
    0 The maximum number of times any hash bucket lock was waited for (0%)
    0 The number of region locks that required waiting (0%)
    0 The number of buffers frozen
    0 The number of buffers thawed
    0 The number of frozen buffers freed
    8125049 The number of page allocations
    17M The number of hash buckets examined during allocations (17891415)
    12 The maximum number of hash buckets examined for an allocation
    8094176 The number of pages examined during allocations
    1 The max number of pages examined for an allocation
    0 Threads waited on page I/O
    Pool File: md.bdbxml
    16384 Page size
    0 Requested pages mapped into the process' address space
    131 Requested pages found in the cache (90%)
    14 Requested pages not found in the cache
    0 Pages created in the cache
    14 Pages read into the cache
    0 Pages written from the cache to the backing file
    Pool File: md.bdbdb
    4096 Page size
    0 Requested pages mapped into the process' address space
    9923933 Requested pages found in the cache (54%)
    8125020 Requested pages not found in the cache
    0 Pages created in the cache
    8125020 Pages read into the cache
    0 Pages written from the cache to the backing file
    Pool File: compMd.bdbdb
    4096 Page size
    0 Requested pages mapped into the process' address space
    10 Requested pages found in the cache (76%)
    3 Requested pages not found in the cache
    0 Pages created in the cache
    3 Pages read into the cache
    0 Pages written from the cache to the backing file
    Default locking region information:
    158 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    1000 Maximum number of locks possible
    1000 Maximum number of lockers possible
    1000 Maximum number of lock objects possible
    40 Number of lock object partitions
    14 Number of current locks
    104 Maximum number of locks at any one time
    4 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    20 Number of current lockers
    35 Maximum number of lockers at any one time
    13 Number of current lock objects
    101 Maximum number of lock objects at any one time
    3 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    12M Total number of locks requested (12049088)
    12M Total number of locks released (12049068)
    0 Total number of locks upgraded
    35 Total number of locks downgraded
    0 Lock requests not available due to conflicts, for which we waited
    0 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    736KB The size of the lock region
    0 The number of partition locks that required waiting (0%)
    0 The maximum number of times any partition lock was waited for (0%)
    0 The number of object queue operations that required waiting (0%)
    0 The number of locker allocations that required waiting (0%)
    0 The number of region locks that required waiting (0%)
    3 Maximum hash bucket length
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock REGINFO information:
    Lock Region type
    5 Region ID
    __db.005 Region name
    0x7fc142d1c000 Original region address
    0x7fc142d1c000 Region address
    0x7fc142d1c138 Region primary address
    0 Region maximum allocation
    0 Region allocated
    Region allocations: 3006 allocations, 0 failures, 0 frees, 1 longest
    Allocations by power-of-two sizes:
    1KB 3002
    2KB 0
    4KB 1
    8KB 0
    16KB 0
    32KB 2
    64KB 1
    128KB 0
    256KB 0
    512KB 0
    1024KB 0
    REGION_JOIN_OK Region flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock region parameters:
    32792 Lock region region mutex [0/28 0% 18243/140468027311840]
    1031 locker table size
    1031 object table size
    824 obj_off
    73640 locker_off
    0 need_dd
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock conflict matrix:
    0 0 0 0 0 0 0 0 0
    0 0 1 0 1 0 1 0 1
    0 1 1 1 1 1 1 1 1
    0 0 0 0 0 0 0 0 0
    0 1 1 0 0 0 0 1 1
    0 0 1 0 0 0 0 0 1
    0 1 1 0 0 0 0 1 1
    0 0 1 0 1 0 1 0 0
    0 1 1 0 1 1 1 0 1
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by lockers:
    Locker Mode Count Status ----------------- Object ---------------
    42 dd= 0 locks held 1 write locks 0 pid/thread 19176/139654678157040
    42 READ 1 HELD md.bdbxml:secondary_ handle 2
    45 dd= 0 locks held 0 write locks 0 pid/thread 19176/139654678157040
    46 dd= 0 locks held 1 write locks 0 pid/thread 19176/139654678157040
    46 READ 1 HELD md.bdbxml:secondary_ handle 4
    49 dd= 0 locks held 0 write locks 0 pid/thread 19176/139654678157040
    4a dd= 0 locks held 1 write locks 0 pid/thread 19176/139654678157040
    4a READ 1 HELD md.bdbxml:secondary_ handle 6
    4d dd= 0 locks held 0 write locks 0 pid/thread 19176/139654678157040
    4e dd= 0 locks held 1 write locks 0 pid/thread 19176/139654678157040
    4e READ 1 HELD md.bdbxml:secondary_ handle 8
    51 dd= 0 locks held 0 write locks 0 pid/thread 19176/139654678157040
    52 dd= 0 locks held 1 write locks 0 pid/thread 19176/139654678157040
    52 READ 1 HELD md.bdbxml:secondary_ handle 10
    55 dd= 0 locks held 0 write locks 0 pid/thread 19176/139654678157040
    56 dd= 0 locks held 2 write locks 0 pid/thread 19176/139654678157040
    56 READ 1 HELD md.bdbxml:secondary_ handle 12
    56 READ 6 HELD md.bdbxml:secondary_ handle 0
    59 dd= 0 locks held 0 write locks 0 pid/thread 19176/139654678157040
    5d dd= 0 locks held 1 write locks 0 pid/thread 19176/139654678157040
    5d READ 1 HELD md.bdbxml:secondary_ handle 14
    60 dd= 0 locks held 0 write locks 0 pid/thread 19176/139654678157040
    61 dd= 0 locks held 2 write locks 0 pid/thread 19176/139654678157040
    61 READ 1 HELD md.bdbxml:secondary_ handle 16
    61 READ 2 HELD md.bdbxml:secondary_ handle 0
    64 dd= 0 locks held 0 write locks 0 pid/thread 19176/139654678157040
    95 dd= 0 locks held 2 write locks 0 pid/thread 19176/139654678157040
    95 READ 1 HELD md.bdbdb:md.db handle 2
    95 READ 1 HELD md.bdbdb:md.db handle 0
    98 dd= 0 locks held 0 write locks 0 pid/thread 19176/139654678157040
    99 dd= 0 locks held 2 write locks 0 pid/thread 19176/139654678157040
    99 READ 1 HELD compMd.bdbdb:compMd.bdb handle 2
    99 READ 1 HELD compMd.bdbdb:compMd.bdb handle 0
    9c dd= 0 locks held 0 write locks 0 pid/thread 19176/139654678157040
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by object:
    Locker Mode Count Status ----------------- Object ---------------
    61 READ 1 HELD md.bdbxml:secondary_ handle 16
    4a READ 1 HELD md.bdbxml:secondary_ handle 6
    46 READ 1 HELD md.bdbxml:secondary_ handle 4
    42 READ 1 HELD md.bdbxml:secondary_ handle 2
    56 READ 6 HELD md.bdbxml:secondary_ handle 0
    61 READ 2 HELD md.bdbxml:secondary_ handle 0
    5d READ 1 HELD md.bdbxml:secondary_ handle 14
    56 READ 1 HELD md.bdbxml:secondary_ handle 12
    52 READ 1 HELD md.bdbxml:secondary_ handle 10
    4e READ 1 HELD md.bdbxml:secondary_ handle 8
    95 READ 1 HELD md.bdbdb:md.db handle 0
    95 READ 1 HELD md.bdbdb:md.db handle 2
    99 READ 1 HELD compMd.bdbdb:compMd.bdb handle 0
    99 READ 1 HELD compMd.bdbdb:compMd.bdb handle 2
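
    A hedged configuration sketch related to the question above (paths and sizes are illustrative): the environment cache can be adjusted without recompiling by placing a DB_CONFIG file in the environment home, which is read the next time the environment regions are created, and the effect can be checked with db_stat -m. The page size itself has to be chosen when each database file is created (DB->set_pagesize); the statistics above show 3,000,000 overflow pages for 3,000,000 records, i.e. every record spills onto overflow pages at the current 4096-byte page size, which makes each random read fetch extra pages.
    # DB_CONFIG in the environment home: set_cachesize <gbytes> <bytes> <ncache>
    echo "set_cachesize 0 524288000 1" > /path/to/env/DB_CONFIG
    # re-run the workload, then check the cache hit ratio and pool statistics
    db_stat -m -h /path/to/env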

  • Unique Query Performance Challenge

    Experts,
    Please I need help in this area.  I have a query written from a Multi-Provider. The query is using 98% of its data from 1 base cube. Currently it takes about 4 minutes to run and I want to bring it down to 1 minute.
    This query is run off a web template and it is not static. The users can drilldown in any direction as required. The performance is more of a problem from the drilldown.
    This query is a cost report with a lot of calculated and restricted key figures, and also a lot of excludes and includes all within the key figures.
    The query has 13 restricted key figures and 5 calculated using the restricted, so 18 in all. Each restricted key figure resembles this example:
    •     Cost Element (hierarchy restriction or singles values or ranges restriction)
    •     Sender/Receiver
    •     Version
    •     Value Type
    •     Amount
    I believe the complex restrictions are slowing this report down. At the moment I am trying to speed up this report and it has proved a big challenge.
    Has anybody experienced a similar challenge before?
    Please do not point me to OSS notes or the standard performance documents. I have tried all that. Is there something else beyond those that can help here? Maybe a trick someone has tried?
    Help!!

    Thank you all for replying:
    This problem is still NOT solved, but I have more information.
    This query contains a hierarchy (the main characteristic in the rows), and a second object also contains a hierarchy, selected via the authorisation in the user profile.
    Actually both hierarchies are selected via authorisation from the user profile, but once the user is in the report, the user can drill down in the displayed hierarchy depending on their authorisation.
    Most users are at the highest level in both hierarchies, so they drill down on both hierarchies, and this is done via a separate selection section in the web template.
    I am trying to build the exact picture of my scenario..... please, any help.
    With this new information, can I still do the following:
    buffer the hierarchy, or cache by hierarchy level???

  • KM Access Performance Issue

    Dear all,
    My Web Dynpro application is taking a long time to access KM repositories and read their contents. What can be done to improve the performance?
    Thanks,
    Shyam.

    Hi,
    Please refer to this guide; it explains how to tune the performance of Knowledge Management, e.g. repository cache settings, KM implementation best practices and much more...
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/5ea2f0b3-0801-0010-a6b9-d9ce4a01bb77
    Thanks
    Krishna

  • WAAS Cached content access through Checkpoint firewall

              Hello,
    I would like to open access to the cached content on the WAAS from a server through a Checkpoint firewall. From what I understand, the server has to have L3 access to the actual WAE device. Is this feasible? What ports would I need to open in the Checkpoint?
    Thanks
    Doug Bradfield      

    Hello Douglas,
    You're correct: if you see an optimized connection, the data is probably being cached (probably not the whole file). There is a big difference between "cached data" and "prepositioned data".
    Cached data is not for you to control or manually retrieve from the WAE box. WAAS controls what is cached or deleted as new data comes through.
    Prepositioned data is something you can manually store on the remote WAE so that remote users benefit from faster access to files that are already prepositioned. But this happens upon the remote users' requests to the server (users don't know that WAAS exists; they just see the server share they have always used): when WAAS notices that a user is requesting a file the remote WAE already has among its prepositioned files, it provides faster access to that file.
    Neither of these two options will let you access WAAS content the way you describe in the initial question. You said you want to open access to WAE files from a server, right? You can still get the files onto your server, and these transfers can be optimized if your server is behind the WAAS optimization path, but you would need to go to the server and copy the files one by one, just as if you were retrieving them from a client PC.
    hope this helps!

  • Regarding Internal table and access performance

    hey guys.
    In my report, I somehow reduced the query time by selecting a minimal set of key fields and moving the selected records into an internal table.
    Now I am restricting the loop over this internal table
    as per my requirements using WHERE conditions (believing that internal table retrieval is faster than database access via a query).
    But my performance still goes down.
    Could you please suggest how to reduce the execution time
    in ABAP programming.
    I used the statements below:
    READ ... BINARY SEARCH
    LOOP ... WHERE
    PERFORM statements
    COLLECT statements
    DELETE itab (and DELETE ADJACENT DUPLICATES)
    SORT itab
    For each of the above statements, is there a faster way to retrieve the records?
    If I look at my bottleneck in SE30, it shows
    ABAP processing at 70 percent,
    database access at 20 percent,
    R/3 system at 10 percent.
    Now, how do I reduce this ABAP processing time?
    Could you please reply.
    ambichan.

    Hello Ambichan,
    It is difficult to suggest the improvements without looking at the actual code that you are running. However, I can give you some general information.
    1. READ using the BINARY SEARCH addition.
    This is indeed a good way of doing a READ. But have you made sure that the internal table is sorted by the required fields before you use this statement?
    2. LOOP...WHERE statement.
    This is also a good way to avoid looping through unnecessary entries. But further improvement can certainly be achieved if you use FIELD-SYMBOLS, which avoids copying each line into a work area:
    FIELD-SYMBOLS <line> LIKE LINE OF itab.
    LOOP AT itab ASSIGNING <line>.
    ENDLOOP.
    3. PERFORM statements.
    A perform statement can not be optimized. what matters is the code that you write inside the FORM (or a subroutine).
    4. COLLECT statements.
    I trust you have used the COLLECT statement to simplify the logic. Let that be as it is. The code is more readable and elegant.
    The COLLECT statement is somewhat performance intensive. It takes more time with a normal internal table (STANDARD). See if you can use an internal table of type  SORTED. Even better, you can use a HASHED internal table.
    5. DELETE itab.(delete duplicates staements too)
    If you are making sure that you are deleting several entries based on a condition, then this should be okay. You cannot avoid using the DELETE statement if your functionality requires you to do so.
    Also, before deleting the DUPLICATES, ensure that the internal table is sorted.
    6. SORT statement.
    It depends on how many entries there are in the internal table. If you are using most of the above points on the same internal table, then it is better to define your internal table as type SORTED. That way, inserting entries will take a little more time (to ensure that the table is always sorted), but all the other operations are going to be much faster.
    Get back to me if you need further assistance.
    Regards,
    Anand Mandalika.

  • TREX indexing on content server performance

    Hi guys,
    Our Portal is integrated with SAP CRM (using WebDAV), which manages documents stored in the SAP Content Server. We use TREX to index these documents so that Portal users can search for them. We are currently evaluating indexing and search performance: if we have a heavy load of documents to index, would it affect the SAP CRM/Content Server that is the document repository (e.g. memory consumption, performance, etc.)?
    Thanks,
    ZM

    Hi Chris,
    do you use the ContentServer in the DMS application? If yes, you need to index documents stored in the DMS_PCD1 docu category.
    Regards,
    Mikhail

  • Big Library:Slow Performance?

    I have about 10,000 photos (.jpg ~3MB each) in my Aperture Library. I want to add many more photos but I'm worried about slow performance. My questions:
    1. How big is your library or how big do you think the libraries can get?
    2. Have you noticed slower performance with larger libraries?
    3. Any opinion on breaking up into multiple smaller libraries vs. 1 larger library?

    I am running two libraries:
    one for all of my work-related imagery, 15,000+ images, 50/50 RAW & JPEGs, and
    the other for all the stuff I shoot of sporting clubs, the bit I give back to the community, 18,000+ predominantly JPEGs.
    Both run smoothly, one on the MacPro and the other on the G4 laptop.
    The issue starts to be the backing up. If you are thinking it will get BIG, try a library for each client. Could be a good selling point as well: "your imagery is isolated from other clients and has its own dedicated backup".
    Tony
