Large Memory Size (:O )

i created an application that uses 35 MB of memory using the fianal release J2SE 1.4.0
i'm using swing & JAXP "DOM" to parse and save
a 25KB xml file.
my question is:
how can i analyze the memory of my program, in terms of JVM & Application & Objects of Application(swing, JAXP, EJB......), to focus on one part
of it, to reduce the memory size, or at least
know if it is the dcesion of JVM to allocate that memory size.
any suggesstions, articles, URLs will really help
thanks.

i have the same condition also but i found those results:
a. JVM takes about 10 MB of memory.
b. GUI swing about 5-8 MB
c. JAXP takes about 15-20 MB of memory for a 30 KB XML file
which nearly causes a total consumed memory of 25-30 MB of memory. weard results, right.
b.r
mnmmm

Similar Messages

  • Large memory size

    i created an application that uses 35 MB of memory using the fianal release J2SE 1.4.0
    i'm using swing & JAXP "DOM" to parse and save
    a 25KB xml file.
    my question is:
    how can i analyze the memory of my program, in terms of JVM & Application & Objects of Application(swing, JAXP, EJB......), to focus on one part
    of it, to reduce the memory size, or at least
    know if it is the dcesion of JVM to allocate that memory size.
    any suggesstions, articles, URLs will really help
    thanks.

    you can use a profiler, which shows the allocated/created objects...
    e.g.: on the following sites are demos available
    http://www.optimizeit.com/
    http://www.sitraka.com/software/jprobe/

  • Very large memory size in process window

    I am new to the Mac OS X and in Windows, win explorer never takes any more than 100mb of memory in my processes window, why is it that when i am only surfing one site or no sites but safari is running on the dock, that my memory size is >256mb?! That is just a little to much memory for one program to be taking up considering I only have 512mb.

    Hello Applesr4eva:
    Welcome to Apple discussions.
    This is a very good knowledge base article that discusses OS X memory utilization:
    http://docs.info.apple.com/article.html?artnum=107918
    Don't compare Windows with OS X. OS X uses sophisticated algorithms to manage both real and virtual memory resources. My advice to you would be to forget it unless you are having performance problems. I am running a G4 with 512 MB and it does not breathe hard unless I use processor intensive applications (rare for me in my home computing environment).
    Barry

  • "Fatal error: Allowed memory size of 33554432 bytes exhausted" I get this error message when I try to load very large threads at a debate site. What to do?

    "Fatal error: Allowed memory size of 33554432 bytes exhausted"
    I get this error message whenever I try to access very large threads at my favorite debate site using Firefox vs. 4 or 5 on my desktop or laptop computers. I do not get the error using IE8.
    The only fixes I have been able to find are for servers that have Wordpress and php.ini files.

    It works, thanks

  • Query is allocating too large memory Error ( 4GB) in Essbase 11.1.2

    Hi All,
    Currently we are preparing dashboards in OBIEE from the Hyperion Essbase ASO (11.1.2) Cubes.When are trying to retrieve data with more attributes we are facing the below error
    "Odbc driver returned an error (SQLExecDirectW).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 96002] Essbase Error: Internal error: Query is allocating too large memory ( > 4GB) and cannot be executed. Query allocation exceeds allocation limits. (HY000)"
    Currently we have data file size less than 2GB so we are using "Pending Cache Size=64MB".
    Please let me know which memory I have to increase to resolve this issue
    Thanks,
    SatyaB

    Hi,
    Do you have any dynamic hierarchies? What is the size of the data set?
    Thanks,
    Nathan

  • Large page sizes on Solaris 9

    I am trying (and failing) to utilize large page sizes on a Solaris 9 machine.
    # uname -a
    SunOS machinename.lucent.com 5.9 Generic_112233-11 sun4u sparc SUNW,Sun-Blade-1000
    I am using as my reference "Supporting Multiple Page Sizes in the Solaris� Operating System" http://www.sun.com/blueprints/0304/817-6242.pdf
    and
    "Taming Your Emu to Improve Application Performance (February 2004)"
    http://www.sun.com/blueprints/0204/817-5489.pdf
    The machine claims it supports 4M page sizes:
    # pagesize -a
    8192
    65536
    524288
    4194304
    I've written a very simple program:
    main()
    int sz = 10*1024*1024;
    int x = (int)malloc(sz);
    print_info((void**)&x, 1);
    while (1) {
    int i = 0;
    while (i < (sz/sizeof(int))) {
    x[i++]++;
    I run it specifying a 4M heap size:
    # ppgsz -o heap=4M ./malloc_and_sleep
    address 0x21260 is backed by physical page 0x300f5260 of size 8192
    pmap also shows it has an 8K page:
    pmap -sx `pgrep malloc` | more
    10394: ./malloc_and_sleep
    Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
    00010000 8 8 - - 8K r-x-- malloc_and_sleep
    00020000 8 8 8 - 8K rwx-- malloc_and_sleep
    00022000 3960 3960 3960 - 8K rwx-- [ heap ]
    00400000 6288 6288 6288 - 8K rwx-- [ heap ]
    (The last 2 lines above show about 10M of heap, with a pgsz of 8K.)
    I'm running this as root.
    In addition to the ppgsz approach, I have also tried using memcntl and mmap'ing ANON memory (and others). Memcntl gives an error for 2MB page sizes, but reports success with a 4MB page size - but still, pmap reports the memcntl'd memory as using an 8K page size.
    Here's the output from sysinfo:
    General Information
    Host Name is machinename.lucent.com
    Host Aliases is loghost
    Host Address(es) is xxxxxxxx
    Host ID is xxxxxxxxx
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Manufacturer is Sun (Sun Microsystems)
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    System Model is Blade 1000
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    ROM Version is OBP 4.10.11 2003/09/25 11:53
    Number of CPUs is 2
    CPU Type is sparc
    App Architecture is sparc
    Kernel Architecture is sun4u
    OS Name is SunOS
    OS Version is 5.9
    Kernel Version is SunOS Release 5.9 Version Generic_112233-11 [UNIX(R) System V Release 4.0]
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Kernel Information
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    SysConf Information
    Max combined size of argv[] and envp[] is 1048320
    Max processes allowed to any UID is 29995
    Clock ticks per second is 100
    Max simultaneous groups per user is 16
    Max open files per process is 256
    System memory page size is 8192
    Job control supported is TRUE
    Savid ids (seteuid()) supported is TRUE
    Version of POSIX.1 standard supported is 199506
    Version of the X/Open standard supported is 3
    Max log name is 8
    Max password length is 8
    Number of processors (CPUs) configured is 2
    Number of processors (CPUs) online is 2
    Total number of pages of physical memory is 262144
    Number of pages of physical memory not currently in use is 4368
    Max number of I/O operations in single list I/O call is 4096
    Max amount a process can decrease its async I/O priority level is 0
    Max number of timer expiration overruns is 2147483647
    Max number of open message queue descriptors per process is 32
    Max number of message priorities supported is 32
    Max number of realtime signals is 8
    Max number of semaphores per process is 2147483647
    Max value a semaphore may have is 2147483647
    Max number of queued signals per process is 32
    Max number of timers per process is 32
    Supports asyncronous I/O is TRUE
    Supports File Synchronization is TRUE
    Supports memory mapped files is TRUE
    Supports process memory locking is TRUE
    Supports range memory locking is TRUE
    Supports memory protection is TRUE
    Supports message passing is TRUE
    Supports process scheduling is TRUE
    Supports realtime signals is TRUE
    Supports semaphores is TRUE
    Supports shared memory objects is TRUE
    Supports syncronized I/O is TRUE
    Supports timers is TRUE
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Device Information
    SUNW,Sun-Blade-1000
    cpu0 is a "900 MHz SUNW,UltraSPARC-III+" CPU
    cpu1 is a "900 MHz SUNW,UltraSPARC-III+" CPU
    Does anyone have any idea as to what the problem might be?
    Thanks in advance.
    Mike

    I ran your program on Solaris 10 (yet to be released) and it works.
    Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
    00010000 8 8 - - 8K r-x-- mm
    00020000 8 8 8 - 8K rwx-- mm
    00022000 3960 3960 3960 - 8K rwx-- [ heap ]
    00400000 8192 8192 8192 - 4M rwx-- [ heap ]
    I think you don't this patch for Solaris 9
    i386 114433-03
    sparc 113471-04
    Let me know if you encounter problem even after installing this patch.
    Saurabh Mishra

  • Jdbc sender channel memory size issue

    Hi Experts,
    We are facing the below error in connect to jdbc sender channel from XI . Could anyone of you suggest me the right action to be taken in this regard.
    Database-level error reported by JDBC driver while executing statement 'select * from*********where posted = '0''. The JDBC driver returned the following error message: 'com.microsoft.sqlserver.jdbc.SQLServerException: The system is out of memory. Use server side cursors for large result sets:Java heap space. Result set size:614,143,127. JVM total memory size:1,031,798,784.'. For details, contact your database server vendor.
    Appreciate your quick help.
    Thanks & Regards,
    Ranganath.

    Regulate the number of records returned by Select statement for permanent fix.
    Few more things....
    a)If you want to select only some fields, then dont use Select * from table.  Rather specify select a,b,c from tablename
    b) specify some flag in the table so that read some records each time and in the update statement update the flag for those records already read. So that you will not reread the same records in the next message.
    Hope you understand.
    c) Increasing java heap size is temporary fix.

  • ORA-00385: cannot enable Very Large Memory with new buffer cache 11.2.0.2

    [oracle@bnl11237dat01][DWH11]$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.2.0 Production on Mon Jun 20 09:19:49 2011
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup mount pfile=/u01/app/oracle/product/11.2.0/dbhome_1/dbs//initDWH11.ora
    ORA-00385: cannot enable Very Large Memory with new buffer cache parameters
    DWH12.__large_pool_size=16777216
    DWH11.__large_pool_size=16777216
    DWH11.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    DWH12.__pga_aggregate_target=2902458368
    DWH11.__pga_aggregate_target=2902458368
    DWH12.__sga_target=4328521728
    DWH11.__sga_target=4328521728
    DWH12.__shared_io_pool_size=0
    DWH11.__shared_io_pool_size=0
    DWH12.__shared_pool_size=956301312
    DWH11.__shared_pool_size=956301312
    DWH12.__streams_pool_size=0
    DWH11.__streams_pool_size=134217728
    #*._realfree_heap_pagesize_hint=262144
    #*._use_realfree_heap=TRUE
    *.audit_file_dest='/u01/app/oracle/admin/DWH/adump'
    *.audit_trail='db'
    *.cluster_database=true
    *.compatible='11.2.0.0.0'
    *.control_files='/dborafiles/mdm_bn/dwh/oradata01/DWH/control01.ctl','/dborafiles/mdm_bn/dwh/orareco/DWH/control02.ctl'
    *.db_block_size=8192
    *.db_domain=''
    *.db_name='DWH'
    *.db_recovery_file_dest='/dborafiles/mdm_bn/dwh/orareco'
    *.db_recovery_file_dest_size=7373586432
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=DWH1XDB)'
    DWH12.instance_number=2
    DWH11.instance_number=1
    DWH11.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat01-vip)(PORT=1521))))'
    DWH12.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat02-vip)(PORT=1521))))'
    *.log_archive_dest_1='LOCATION=/dborafiles/mdm_bn/dwh/oraarch'
    *.log_archive_format='DWH_%t_%s_%r.arc'
    #*.memory_max_target=7226785792
    *.memory_target=7226785792
    *.open_cursors=1000
    *.processes=500
    *.remote_listener='LISTENERS_SCAN'
    *.remote_login_passwordfile='exclusive'
    *.sessions=555
    DWH12.thread=2
    DWH11.thread=1
    DWH12.undo_tablespace='UNDOTBS2'
    DWH11.undo_tablespace='UNDOTBS1'
    SPFILE='/dborafiles/mdm_bn/dwh/oradata01/DWH/spfileDWH1.ora' # line added by Agent
    [oracle@bnl11237dat01][DWH11]$ cat /etc/sysctl.conf
    # Kernel sysctl configuration file for Red Hat Linux
    # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
    # sysctl.conf(5) for more details.
    # Controls IP packet forwarding
    net.ipv4.ip_forward = 0
    # Controls source route verification
    net.ipv4.conf.default.rp_filter = 1
    # Do not accept source routing
    net.ipv4.conf.default.accept_source_route = 0
    # Controls the System Request debugging functionality of the kernel
    kernel.sysrq = 0
    # Controls whether core dumps will append the PID to the core filename
    # Useful for debugging multi-threaded applications
    kernel.core_uses_pid = 1
    # Controls the use of TCP syncookies
    net.ipv4.tcp_syncookies = 1
    # Controls the maximum size of a message, in bytes
    kernel.msgmnb = 65536
    # Controls the default maxmimum size of a mesage queue
    kernel.msgmax = 65536
    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736
    # Controls the maximum number of shared memory segments, in pages
    #kernel.shmall = 4294967296
    kernel.shmall = 8250344
    # Oracle kernel parameters
    fs.aio-max-nr = 1048576
    fs.file-max = 6815744
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    kernel.shmmax = 536870912
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048586
    net.ipv4.tcp_wmem = 262144 262144 262144
    net.ipv4.tcp_rmem = 4194304 4194304 4194304
    Please can I know how to resolve this error.

    CAUSE: User specified one or more of { db_cache_size , db_recycle_cache_size, db_keep_cache_size, db_nk_cache_size (where n is one of 2,4,8,16,32) } AND use_indirect_data_buffers is set to TRUE. This is illegal.
    ACTION: Very Large Memory can only be enabled with the old (pre-Oracle_8.2) parameters

  • Problems printing to large paper size

    Hi,
    Using Acrobat 9
    In my work, I use different programs for floorplans and renderings. I send to my Customers the documents as PDF file , and usually the scaled floorplans are in  large paper size (22x34 or up)
    With one of the CAD programs, all works fine with paper size up to size C,  but if I choose, for example, Ansi D, I see a screen "Creating Adobe PDF" but nothing happens, the bar doesn't move and the file is not printed / saved
    I'd appreciate any idea to solve this issue
    TIA

    There are a couple of things you can try; 1.) see if you have enough system memory to process the page ( it could be running out of system memory ); 2.) set an invisible holding line to the page / document and lock it ( no stroke, no fill @ 13 x 19 ); 3.)  Use the Page tool ( if you haven't already ) and set it to the document, edge-to-edge.  Also, check you ink supplies, there may be an empty cartridge preventing you from printing the entire file.  Hope this helps.

  • Oracle 10gR2 LARGE PAGE SIZE on Windows 2008 x64 SP2

    Hello Oracle Experts,
    What are the advantages of Large Page Size and how would I know when my DB will benefit from Large Page Sizes?
    My undeqrstanding is on Windows x64 – 8kb default page size – will now be 2 MB. Will this speed up accesses to buffer cache? If so is there a latch wait that I can monitor before vs. after to verify that large page size has improved performance?
    My Database server has 256GB RAM and SGA is set to 180GB. I am quite sure the overhead involved in maintaining a large number of 8kb allocations (as opposed to 2MB) must be high - how can i monitor this?
    I am planning to follow the procedure here:
    http://download.oracle.com/docs/html/B13831_01/ap_64bit.htm#CHDGFJJD
    The DB is for SAP on a 8CPU/48 core IBM x3950. For some reason SAP documentation does not mention anything about this registry setting or even if Large Page Size is supported in the SAP world.
    Part 2 : I notice that more recent Oracle patch sets (example 25) turn NUMA OFF by default. Why is this and what is the impact of disabling NUMA on a system like x3950 (which is a NUMA based server)?
    My understanding is Oracle would no longer know that some memory is Local (and therefore fast) and some memory is Remote (therefore slow). Overall I am guessing this could cause a real performance issue on this DB.
    -points for detailed reply!
    thanks a lot -

    Hello
    Thanks for your reply. I am very interested to hear further about the limitations of Windows 2008 and the benefits of Oracle Linux.
    Generally we find that Windows 2008 has been pretty good, a big improvement over Windows 2003 (bluescreens don't occur ever etc)
    Can you advise further about Large Page Size for the buffer cache? I assume this applies on both Windows and Linux (I am guessing there is a similiar parameter for 10gR2 on Linux).
    SAP have not yet fully supported Oracle 11g so this is why 11g has not made it into the SAP world yet.
    Can you also please advise about NUMA? regardless of whether we run Linux or Windows this setting needs to be considered.
    Thanks

  • Vi memory size verses file size

    I have a subvi that does not contain much code data block and front panel objects thus giving a total memory usage of 151kb as shown below. However the file size is almost 2 orders of magnitude bigger. What is the explanation of this?
    I also tried to copy and paste diagram to a blank vi and that file was twice the size of the original.
    Any idea as to what's causing the discrepancy in numbers and the large file size. I have some empty clusters going in and out of the subvi and I have an imaq create function within the subvi. But the numbers don't add up.
    Below is the VI memory use and front diagram showing mostly empty objects. I'd upload the program itself but it's a bit large, hence the problem!
    Thanks y'all!!!

    I figured it out!!!
    It's the "original image" constant that fed into the cluster. It was a constant I created off an image wire in another vi. I guess it carried over the image data when created. I deleted this constant and switched this to an "IMAQ create" function and viola, 35KB file now!!!
    Thanks for all your help people. I think it might be advisable when displaying front and block diagram memory,  that constants relating to IMAQ images were included. It's not readily identifiable unless you track down the culprit(s)
    From this  to  this 

  • Activity Monitor showing incorrect amount of virtual memory size

    Hi,
    I noticed that in Activity Monitor that the Virtual Memory size was incorrect.
    Does any body else have the same issue?

    Ignore that. The method it's calculated works as following:
    1. You have the Finder and Safari open.
    2. Both programs and no other ones use the content of a single block of data. This block takes up 2MB of RAM or swapfile space.
    3. The block is counted as 4MB in the VM size entry because it's being used twice.
    Expanding the scenario over many processes and blocks adds up to a VM size figure which is far larger than the actual amount in use or the combined total of RAM and swapfile space.
    (48360)

  • Large Memory Pages and Mapped Byte Buffers worthwhile?

    Hi
    I have just been reading about large memory pages that let the OS work with memory pages of 2M instead of 4k thus reducing the Translation-Lookaside Buffer Cache misses on the CPU. However my use case revolves around using mapped bytebuffers on a machine with 32GB of memory and file sizes of 400GB or more for the file underlying the mapped buffer, my program is very memory intensive and causes the OS to do alot of paging, hence my desire for any way to speed up access. Would using large memory pages in linux and the java vm option +UseLargePages have any affect on read/writing data from mapped byte buffers or is it only applicable to data that is always resident in memory?
    Thanks in advance

    Hi
    I have just been reading about large memory pages that let the OS work with memory pages of 2M instead of 4k thus reducing the Translation-Lookaside Buffer Cache misses on the CPU. However my use case revolves around using mapped bytebuffers on a machine with 32GB of memory and file sizes of 400GB or more for the file underlying the mapped buffer, my program is very memory intensive and causes the OS to do alot of paging, hence my desire for any way to speed up access. Would using large memory pages in linux and the java vm option +UseLargePages have any affect on read/writing data from mapped byte buffers or is it only applicable to data that is always resident in memory?
    Thanks in advance

  • Query is allocating too large memory error in OBIEE 11g

    Hi ,
    We have one pivot table(A) in our dashboard displaying , revenue against a Entity Hierarchy (i.e we have 8 levels under the hierarchy) And we have another pivot table (B) displaying revenue against a customer hierarchy (3 levels under it) .
    Both tables running fine under our OBIEE 11.1.1.6 environment (windows) .
    After deploying the same code (RPD&catalog) in a unix OBIEE 11.1.1.6 server , its throwing the below error ,while populating Pivot table A :
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    *State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 96002] Essbase Error: Internal error: Query is allocating too large memory ( > 4GB) and cannot be executed. Query allocation exceeds allocation limits. (HY000)*
    But , pivot table B is running fine . Help please !!!!!
    data source used : essbase 11.1.2.1
    Thanks
    sayak

    Hi Dpka ,
    Yes ! we are hitting a seperate essbase server with Linux OBIEE enviorement .
    I'll execute the query in essbase and get back to you !!
    Thanks
    sayak

  • How do I set up iPhoto '11 version 9.2.1 to send emails with medium/Large/actual size? I try sending photos to a publisher and iPhoto automatically shrinks them to about 40 kB.  Publisher needs between 0.5 and 7 MB.  My ISP has a limit of 12MB

    How can I get to email photos at higher resolution?
    My "iPhoto" or "Mail" automatically reduces photos to about 40 KB .
    My photos, taken with a  common Panasonic Lumix DMC-TZ20 include flora which I send to a publisher.
    I am fairly new to my MacBook (Mac S X Version 10.6.8).  I have been using it for a year.
    I found I could not successfully send Large or Full size photos.
    A few days ago I downloaded a new version of iPhoto (iPhoto '11 version 9.2.1).
    Again, although I get the option of choosing to send small/medium/large//full size photos, it actually only sends them as "Small" (ie about 40kb) which is useless to publish.  If I drag a picture onto the desktop (where my EXIF tells me it is full size) then attach it to an email by dragging, againg the sent email is at "small" size ~40KB).
    My ISP (Westnet) tells me that their maximum accepted size is 12 MB.
    I suspect that I have failed to set up iPhoto for this purpose.  (I am well into retirement but, although geriatric, did study University Physics about half a century ago)

    Also, just a note.   iMac PPC is about iMacs that were available up to early 2006.    iMac Intel is for iMacs that were made available early in to 2006 to present.  Forums for what you have in your profile include:
    https://discussions.apple.com/community/mac_os/mac_os_x_v10.6_snow_leopard
    https://discussions.apple.com/community/ilife/iphoto
    https://discussions.apple.com/community/notebooks/macbook
    Please consider posting and reviewing those locations in the future when you have a question that pertains to one of those products.  You'll much more likely get a resposne then.   This is a user to user forum, and you have to seek knowledge in the areas that are most likely to have the topic of your interest.

Maybe you are looking for

  • Open Sales Orders Backlog Report

    Hi All: I have been asked to create Backlog reports for Open Sales Orders and the revenue amounts for the next 15days, one month, 2 months, 3 months 6 months etc etc. Does anyone know of a standard report that covers that and the business content and

  • Disk Utility Broken (fails to image or verify properly)

    Hey, my disk Utility seems to be broken. When I try to clone a drive it goes along fine, then ultimately creates something that doesn't pass it's own verification afterwards, and fails when I try to open the image. I repair both the cloned (main/inte

  • How to add radiobuttons in ALV Grid

    Hi, I refered the threads in SDN but i am not able to get proper solution for my requirement. I am displaying output in ALV Grid format. After displaying the output i mean in the output screen i need few buttons and radio buttons . when user selects

  • How to load  a dat file  data into CRM Application

    Hi, My sourcre file is .DAT File and I want load data in that .DAT file in CRM Application. Is it Possible? if yes then how. As this is urgent requirement please respond soon Thanks in Advance Raghu Edited by: 869619 on Aug 10, 2011 1:39 AM

  • PPOME HOLDER NAME IS not displayed

    Here is the case ..whena person is retired ..wht we do currently is run a retire action ...and the person is placed in a retirement org unit ..Wwe go to ppome ..we can can c the person who has retired . Now when he is dead ..his wife needs to get the