Limitation on data file size for Oracle 8i on Windows 2000

What is the size limit for each Oracle data file?
Oracle 8i
Windows 2000 Server (32-bit)

Hi,
You can get the details from the documentation itself.
Refer to: http://www.taom.ru/docs/oradoc.817/server.817/a76961/ch43.htm#11789 (Oracle8i Reference Release 2 (8.1.6))
Check 10g as well: http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/limits002.htm (10g Release 2 (10.2))
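For a quick check against your own instance, a minimal sketch (it assumes the smallfile limit of 4194303 blocks per datafile cited in the references above; on 32-bit Windows the OS file-size limit applies on top of this):

    -- Sketch: derive the per-datafile ceiling from the instance block size.
    -- 4194303 (2^22 - 1) is the smallfile datafile block limit from the docs above.
    select value                                                 as block_size_bytes,
           4194303 * to_number(value)                            as max_datafile_bytes,
           round(4194303 * to_number(value) / power(1024, 3), 1) as max_datafile_gb
      from v$parameter
     where name = 'db_block_size';

With the common 8K block size this works out to roughly 32 GB per data file.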
- Pavan Kumar N

Similar Messages

  • Suggested data file size for Oracle 11

    Hi all,
Creating a new system (SolMan 7.1) on AIX 6.1 running Oracle 11.
I have 4 logical volumes for data, sized at 100 GB each. During the installation I'm being asked to input the size for the data files. The default is "2000mb/2gb"; is this acceptable for a system sized like mine, or should I double them to 4 GB each? I know the max is 32 GB per data file, but that seems a bit large to me. I just wanted to know if there is a standard best practice for this, or a formula to use based on system sizing.
Unfortunately, I was not able to find any quick suggestions on this in the Best Practices guide.
Any help would be greatly appreciated.
    Thanks!

    Hi Ben,
Check SAP Note 129439 - Maximum file sizes with Oracle.
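As a rough illustration of the trade-off (a hedged sketch only - the tablespace name and file path are hypothetical, and on SAP systems SAPinst/BR*Tools normally manage this), fewer larger files with autoextend keep the file count manageable while staying under the 32 GB smallfile ceiling:

    -- Hypothetical example: a 4 GB data file that can grow in 512 MB steps
    alter tablespace PSAPSR3
      add datafile '/oracle/SID/sapdata1/sr3_5/sr3.data5' size 4096M
      autoextend on next 512M maxsize 8192M;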
    Best regards,
    Orkun Gedik

  • Optimal NTFS block size for Oracle 11G on Windows 2008 R2 (OLTP)

    Hi All,
We are currently setting up an Oracle 11g instance on a Windows 2008 R2 server and are looking to see if there is an optimal NTFS block size. I've read the following: http://docs.oracle.com/cd/E11882_01/win.112/e10845/specs.htm
But it only mentions the database block sizes that can be used (2K-16K). Basically, what I got out of it was that the different block sizes affect the maximum number of database files possible for each database.
Is there an optimal NTFS block size for an Oracle 11g OLTP system on Windows?
    Thanks in advance

Is there an optimal NTFS block size for an Oracle 11g OLTP system on Windows?
Ideally the file system block size should equal the Oracle tablespace block size, or at least divide evenly into it (1/N of the Oracle block size).
For example, if the Oracle block size is 8K, then an NTFS block size of 8K is best, but 4K or 2K also work.
Both must also be a whole multiple of the disk sector size. Older disks had 512-byte sectors; contemporary HDDs usually have an internal sector size of 4K.
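A small sketch for verifying the pieces on Windows (drive letters are examples; fsutil reports the NTFS allocation unit as "Bytes Per Cluster"):

    rem Check the current NTFS allocation unit size on C:
    fsutil fsinfo ntfsinfo C:
    rem Format a new data volume with an 8K allocation unit to match an 8K Oracle block size
    format F: /FS:NTFS /A:8192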

GoldenGate 10.4.0 for Oracle 11g on Windows 2000, XP and 2003

    Hello,
I am looking for Oracle GoldenGate 10.4 for my demo/POC. Is this available for download from Oracle?
I could not find OGG 10; if anyone has the link, please let me know.
    Thanks,

Have you looked in eDelivery?
Oracle Fusion Middleware, Windows 32-bit, Media Pack for GoldenGate (at the bottom of the page), then pick the release for Oracle 10g or 11g.
    https://edelivery.oracle.com

Maximum data file size in 10g, 11g

DB versions: 10g, 11g
OS & versions: AIX 6.1, SunOS 5.9, Solaris 10
This is what the Oracle 11g documentation
http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/limits002.htm
says about the maximum data file size:
"Operating system dependent. Limited by maximum operating system file size; typically 2^22 or 4 million blocks."
I don't understand what this 2^22 thing is.
    In our AIX machine and ulimit command show
    $ ulimit -a
    time(seconds)        unlimited
    file(blocks)         unlimited  <-------------------------------------------
    data(kbytes)         unlimited
    stack(kbytes)        4194304
    memory(kbytes)       unlimited
    coredump(blocks)     unlimited
    nofiles(descriptors) unlimited
    threads(per process) unlimited
processes(per user)  unlimited
So this means that on AIX both the OS and Oracle can create a data file of any size. Right?
What about 10g and 11g DBs running on SunOS 5.9 and Solaris 10? Is there any limit on the data file size?

How do I determine the maximum number of blocks for an OS?
df -g would give you the block size. The OS block size is 512 bytes on AIX.
Let's say the db_block_size is 8K. What would the maximum data file size be in a smallfile tablespace and in a bigfile tablespace?
A smallfile (traditional) tablespace is a traditional Oracle tablespace, which can contain 1022 datafiles or tempfiles, each of which can contain up to approximately 4 million (2^22) blocks; with an 8K block size, that is 32 GB per file.
A bigfile tablespace contains only one datafile or tempfile, which can contain up to approximately 4 billion (2^32) blocks. The maximum size of the single datafile or tempfile is 128 terabytes (TB) for a tablespace with 32K blocks and 32 TB for a tablespace with 8K blocks.
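As a worked check of those two limits with an 8K block (a sketch; the block counts are the 2^22 and 2^32 figures quoted above):

    -- 8K block size: smallfile vs. bigfile datafile ceilings
    select 8192 * 4194303    / power(1024, 3) as smallfile_max_gb,  -- ~32 GB
           8192 * 4294967295 / power(1024, 4) as bigfile_max_tb     -- ~32 TB
      from dual;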
    HTH
    -Anantha

  • Impact of data file size on DB performance

    Hi,
    I have a general query regarding size of data files.
Considering DB performance, which of the two options below is better?
1. Bigger data files but fewer of them (e.g. 2 files of 8 GB each)
2. Smaller data files but more of them (e.g. 8 files of 2 GB each)
I am working on a DB where I have noticed very high I/O.
I understand there might be many reasons for this.
However, I am checking the possibility of improving DB performance by optimizing data file sizes (including the TEMP/UNDO tablespaces).
Kindly share your experiences with determining the optimal file size.
    Please let me know in case you need any DB statistics.
    Few details are as follows:
    OS: Solaris 10
    Oracle: 10gR2
    DB Size: 80G (Approx)
    Data Files: UserData - 6 (15G each), UNDO - 2 (8G each), TEMP - 2 (4G each)
    Thanks,
    Ullhas

    Ullhas wrote:
    I have a general query regarding size of data files.
Considering DB performance, which of the below 2 options are better?
Size or number really does not matter, assuming other variables are constant. More files result in more open file handles, but at your database size it matters not.
I am working on a DB where I have noticed very high I/O.
I understand there might be many reasons for this.
However, I am checking for the possibility to improve DB performance through optimizing data file sizes. (Including TEMP/UNDO tablespaces)
Remember this when tuning I/O: the fastest I/O is the one that never takes place! High I/O may very well be a symptom of unnecessary full table scans (FTS) or poor execution plans. Validate this first before tuning I/O and you will be much better off.
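In that spirit, a minimal sketch for seeing where the I/O actually goes before resizing anything (standard v$ views; the column choice is just an example):

    -- Rank data files by physical reads + writes since instance startup
    select f.name, s.phyrds, s.phywrts, s.readtim, s.writetim
      from v$filestat s, v$datafile f
     where f.file# = s.file#
     order by s.phyrds + s.phywrts desc;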
    Regards,
    Greg Rahn
    http://structureddata.org

  • Event ID: 4, Source: Microsoft-Windows-Kernel-EventTracing, maximum file size for session "ReadyBoot" has been reached.

    Hello,
I upgraded my machine to Win7 x64 Pro about 3 weeks ago. My HW is an Asus mobo, Intel Q9450 w/8GB RAM. The boot drives are two Raptors configured as RAID 0+1. All the drivers are the latest available from Intel, Asus and 3rd-party vendors. My WEI is 5.9, limited by the disk transfer rates, otherwise 7.1 and 7.2 on the other indexes.
I've been receiving these errors at boot:
    Log Name:      Microsoft-Windows-Kernel-EventTracing/Admin
    Source:        Microsoft-Windows-Kernel-EventTracing
    Date:          11/10/2009 7:51:03 AM
    Event ID:      4
    Task Category: Logging
    Level:         Warning
    Keywords:      Session
    User:          SYSTEM
    Computer:      herbt-PC
    Description:
    The maximum file size for session "ReadyBoot" has been reached. As a result, events might be lost (not logged) to file "C:\Windows\Prefetch\ReadyBoot\ReadyBoot.etl". The maximum files size is currently set to 20971520 bytes.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-Kernel-EventTracing" Guid="{B675EC37-BDB6-4648-BC92-F3FDC74D3CA2}" />
        <EventID>4</EventID>
        <Version>0</Version>
        <Level>3</Level>
        <Task>1</Task>
        <Opcode>10</Opcode>
        <Keywords>0x8000000000000010</Keywords>
        <TimeCreated SystemTime="2009-11-10T12:51:03.393985600Z" />
        <EventRecordID>28</EventRecordID>
        <Correlation />
        <Execution ProcessID="4" ThreadID="164" />
        <Channel>Microsoft-Windows-Kernel-EventTracing/Admin</Channel>
        <Computer>herbt-PC</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="SessionName">ReadyBoot</Data>
        <Data Name="FileName">C:\Windows\Prefetch\ReadyBoot\ReadyBoot.etl</Data>
        <Data Name="ErrorCode">3221225864</Data>
        <Data Name="LoggingMode">0</Data>
        <Data Name="MaxFileSize">20971520</Data>
      </EventData>
    </Event>
    The image for PID 4 is listed as System.
    My searches have turned up similar events listed but no solutions.
    Any help would be appreciated.
    Cheers!

    Session "Circular Kernel Context Logger" failed to start with the following error: 0xC0000035
    As suggested above I assume this is a microsoft issue?  It has been discussed here and other forums for quite some time.  I never have seen a fix?  I wish when we received errors of this nature microsoft would tell us what they were.  How is this related to superfetch?  What is superfetch?  Why would superfetch have changed?
    BY THE WAY....  Superfetch is on(started) is on automatic and logs on as local system.  So this is not the cause of my issue.  Also what is readyboot?  Does the average computer really know what these programs/services or unique microsoft words/terms are?
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Kernel-EventTracing" Guid="{B675EC37-BDB6-4648-BC92-F3FDC74D3CA2}" />
    <EventID>2</EventID>
    <Version>0</Version>
    <Level>2</Level>
    <Task>2</Task>
    <Opcode>12</Opcode>
    <Keywords>0x8000000000000010</Keywords>
    <TimeCreated SystemTime="2010-04-11T14:35:49.829600000Z" />
    <EventRecordID>25</EventRecordID>
    <Correlation />
    <Execution ProcessID="4" ThreadID="48" />
    <Channel>Microsoft-Windows-Kernel-EventTracing/Admin</Channel>
    <Computer>Daddy-PC</Computer>
    <Security UserID="S-1-5-18" />
  </System>
  <EventData>
    <Data Name="SessionName">Circular Kernel Context Logger</Data>
    <Data Name="FileName"></Data>
    <Data Name="ErrorCode">3221225525</Data>
    <Data Name="LoggingMode">268436608</Data>
  </EventData>
</Event>
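A commonly suggested workaround (an assumption based on the Autologger registry convention, not an official fix - back up the registry first) is to raise the ReadyBoot session's maximum trace size:

    rem Hypothetical sketch: double the ReadyBoot ETL limit from 20 MB to 40 MB,
    rem then reboot so the autologger picks up the new value
    reg add HKLM\SYSTEM\CurrentControlSet\Control\WMI\Autologger\ReadyBoot /v MaxFileSize /t REG_DWORD /d 41943040 /f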

SQL*Loader maximum data file size?

Hi - I wrote a SQL*Loader script, run through a shell script, which imports data into a table from a CSV file. The CSV file size is around 700 MB. I am using Oracle 10g in a Sun Solaris 5 environment.
My question is: is there a maximum data file size? The following code is from my shell script.
    SQLLDR=
    DB_USER=
    DB_PASS=
    DB_SID=
    controlFile=
    dataFile=
    logFileName=
    badFile=
    ${SQLLDR} userid=$DB_USER"/"$DB_PASS"@"$DB_SID \
              control=$controlFile \
              data=$dataFile \
              log=$logFileName \
              bad=$badFile \
              direct=true \
              silent=all \
          errors=5000
Here is my control file code:
    LOAD DATA
    APPEND
    INTO TABLE KEY_HISTORY_TBL
    WHEN OLD_KEY <> ''
    AND NEW_KEY <> ''
    FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(
            OLD_KEY "LTRIM(RTRIM(:OLD_KEY))",
            NEW_KEY "LTRIM(RTRIM(:NEW_KEY))",
            SYS_DATE "SYSTIMESTAMP",
            STATUS CONSTANT 'C'
)
Thanks,
    -Soma
    Edited by: user4587490 on Jun 15, 2011 10:17 AM
    Edited by: user4587490 on Jun 15, 2011 11:16 AM

    Hello Soma.
How many records exist in your 700 MB CSV file? How many do you expect to process in 10 minutes? You may want to consider performing a set of simple unit tests with 1) 1 record, 2) 1,000 records, 3) a 100 MB file size, etc., to (1) validate that your shell script and control file syntax function as expected (including the writing of log files, etc.), and (2) gauge how long the processing will take for the full file.
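A sketch of that unit-test idea in shell (file names are examples; it reuses the placeholder variables from the script above):

    # Take the first 1,000 data records and time a trial load
    head -n 1000 bigfile.csv > sample_1k.csv
    time ${SQLLDR} userid=$DB_USER/$DB_PASS@$DB_SID \
         control=$controlFile data=sample_1k.csv \
         log=sample_1k.log bad=sample_1k.bad \
         direct=true errors=5000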
    Hope this helps,
    Luke
    Please mark the answer as helpful or answered if it is so. If not, provide additional details.
    Always try to provide actual or sample statements and the full text of errors along with error code to help the forum members help you better.

  • Essbase cube data file size

    Hi,
Why do the data file sizes shown in EAS > Database > Edit > Properties > Storage differ from the size of a full database export?
    Thanks,
    KK

    Tim, in all seriousness, I am not a stalker. Honestly. You just post about things that interest me/we share some of the same skills. Alternatively, Glenn stalks me, I stalk you, it's time for you to become a sociopath too and stalk someone else.
Okay, with that insanity out of the way, the other thing that could have a big impact on export size is whether the OP did a level-zero or input-level export as opposed to a full export. In BSO databases in particular, that can lead to a very, very large set of .PAG (and to some extent .IND) files and a rather small export file size, as the calculated values aren't getting written out.
If the export is done through EAS, I think the default is level zero.
    Regards,
    Cameron Lackpour
    Edited by: CL on Sep 23, 2011 2:38 PM
Bugger, the OP wrote "full database export". Okay, never mind, I am a terrible stalker. I agree with everything Tim wrote re compression.
In an effort to add something useful: if you use the new-ish database archive feature in MaxL, you will essentially get the .PAG and .IND files combined into a single binary file. I used to be a big fan of full restructures, clears, and reloads to do defrags, but now go with the restructure command. Despite the fact that it's single threaded, in my limited testing (only done at one client) it was faster than the export-all, clear, and reload approach. If you combine that with the archive mode you might have a better approach.
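For reference, a minimal MaxL sketch of that archive feature (the application/database name and file path are hypothetical):

    /* Archive the database's .PAG and .IND files into one binary file */
    alter database Sample.Basic force archive to file '/backup/sample_basic.arc';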
    Regards,
    Cameron Lackpour

  • What is an idea of maximum file size for a film in Captivate?

    Hi there,
I'm creating an e-learning course in Captivate 7, and it is being published as HTML5. This means the films I've imported are being converted to MP4s, and they are around 10-20 MB in size once converted. They seem very slow to load on some computers - do you think the file size is too big? Or could it be another issue? Does anyone have a recommendation for a maximum file size for films? They are 572 x 322 px and around 1-2 minutes in length.
    Thanks in advance

Probably better guidelines for dictating the 'size' of a VI are:
typically no more than 1 video screen in size
is it legible
and are its function and operation clear.
I have seen examples of '1 VI does it all' that were many screens wide and tall, totally a flustercuck, and nearly 1 MB in size.
Globals and local variables (except for LV2 style) are typically shunned, for they can create a host of problems (race conditions, indeterminate data).
Use the connector pane to wire controls and indicators to. Then use wires between VIs to transfer data. I tend to use clusters to hold shared data.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~
    "It’s the questions that drive us.”
    ~~~~~~~~~~~~~~~~~~~~~~~~~~

  • Negative data file size

RDBMS: Oracle 10g R2
When I execute the statement to determine the size of the data files, the usage for data file DATA8B.ORA comes out negative. Why?
Beforehand, I dropped a table with 114,000,000 rows in this tablespace.
FILE_NAME                    FILE_SIZE(MB)  USED(MB)  PCT_USED(%)  FREE(MB)
    G:\DIL\DATA5D.ORA 4096 3840.06 93.75 255.94
    Total tablespace DATA5---------------------------> 16384 14728.24 89.9 1655.76
    I:\DIL\DATA6A.ORA 4096 3520.06 85.94 575.94
    I:\DIL\DATA6B.ORA 4096 3456.06 84.38 639.94
    I:\DIL\DATA6C.ORA 4096 3520.06 85.94 575.94
    I:\DIL\DATA6D.ORA 4096 3520.06 85.94 575.94
    Total tablespace DATA6---------------------------> 16384 14016.24 85.53 2367.76
    G:\DIL\DATA7A.ORA 4096 3664.06 89.45 431.94
    G:\DIL\DATA7B.ORA 4096 3720.06 90.82 375.94
    G:\DIL\DATA7C.ORA 4096 3656.06 89.26 439.94
    G:\DIL\DATA7D.ORA 4096 3728.06 91.02 367.94
    G:\DIL\DATA7E.ORA 4096 3728.06 91.02 367.94
    Total tablespace DATA7---------------------------> 20480 18496.3 90.3 1983.7
    G:\DIL\DATA8A.ORA 3500 2880.06 82.29 619.94
    G:\DIL\DATA8B.ORA 4000 -2879.69 -71.99 6879.69
    Total tablespace DATA8---------------------------> 7500 0.37 5.14 7499.63

    the query is:
select substr(decode(grouping(b.file_name),
                     1, decode(grouping(b.tablespace_name),
                               1, rpad('TOTAL:', 48, '=') || '>>',
                               rpad('Total tablespace ' || b.tablespace_name, 49, '-') || '>'),
                     b.file_name),
              1, 50) file_name,
       sum(round(kbytes_alloc / 1024, 2)) file_size,
       sum(round((kbytes_alloc - nvl(kbytes_free, 0)) / 1024, 2)) used,
       decode(grouping(b.file_name),
              1, decode(grouping(b.tablespace_name),
                        1, sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100 / b.nbtbs, 2)),
                        sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100 / b.nbtbsfile, 2))),
              sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100, 2))) pct_used,
       sum(round(nvl(kbytes_free, 0) / 1024, 2)) free
  from (select sum(bytes) / 1024 kbytes_free,
               max(bytes) / 1024 largest,
               tablespace_name,
               file_id
          from sys.dba_free_space
         group by tablespace_name, file_id) a,
       (select sum(bytes) / 1024 kbytes_alloc,
               tablespace_name,
               file_id,
               file_name,
               count(*) over (partition by tablespace_name) nbtbsfile,
               count(distinct tablespace_name) over () nbtbs
          from sys.dba_data_files
         group by tablespace_name, file_id, file_name) b
 where a.tablespace_name(+) = b.tablespace_name
   and a.file_id(+) = b.file_id
 group by rollup(b.tablespace_name, file_name);
Database Control shows the same negative data file size, too...
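One possible explanation (an assumption based on documented 10g behaviour, not confirmed in this thread): after a DROP TABLE the segment sits in the recycle bin, and DBA_FREE_SPACE counts that space as free, so "allocated minus free" can go negative in scripts like the one above. A minimal check:

    -- Sketch: see whether dropped segments in the recycle bin explain the figure
    select owner, object_name, ts_name, space
      from dba_recyclebin
     where ts_name = 'DATA8';
    -- purge tablespace DATA8;  -- only if the dropped data is truly unwanted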

  • S1000 Data file size limit is reached in statement

I am new to Java and was given the task of troubleshooting a Java application that was written a few years ago and is no longer supported. The Java application creates database files in the user's directory: diwdb.properties, diwdb.data, diwdb.lproperties, diwdb.script. The purpose of the application is to open a zip file and insert the files into a table in the database.
    The values that are populated in the diwdb.properties file are as follows:
    #HSQL Database Engine
    #Wed Jan 30 08:55:05 GMT 2013
    hsqldb.script_format=0
    runtime.gc_interval=0
    sql.enforce_strict_size=false
    hsqldb.cache_size_scale=8
    readonly=false
    hsqldb.nio_data_file=true
    hsqldb.cache_scale=14
    version=1.8.0
    hsqldb.default_table_type=memory
    hsqldb.cache_file_scale=1
    hsqldb.log_size=200
    modified=yes
    hsqldb.cache_version=1.7.0
    hsqldb.original_version=1.8.0
    hsqldb.compatible_version=1.8.0
Once the database file gets to 2 GB it brings up the error message 'S1000 Data file size limit is reached in statement (Insert into <tablename>...'.
From searching on the internet it appears that the parameter hsqldb.cache_file_scale needs to be increased; 8 was a suggested value.
I have the distribution files (.jar & .jnlp) that are used to run the application, and a source directory that contains Java files. But I do not see any properties files in which to set parameters. I was able to load both directories into NetBeans but really don't know if the files can be rebuilt for distribution, as I'm not clear on what I'm doing and NetBeans shows errors in some of the directories.
I have also tried adding parameters to the startup URL: http://uknt117.uk.infores.com/DIW/DIW.jnlp?hsqldb.large_data=true?hsqldb.cache_file_scale=8 but that does not affect the application.
I have been struggling with this for quite some time and would greatly appreciate any assistance to help resolve this.
    Thanks!

Thanks! But where would I run the SQL statement? When anyone launches the application it creates the database files in their user directory. How would I connect to the database after that to execute the statement?
I see the CREATE TABLE statements in the files I have pulled into NetBeans, in both the source folder and the distribution folder. Could I add the statement there before the table is created in the jar file in the distribution folder and then re-compile it for distribution? Or would I need to add it to the file in the source directory and recompile those to create a new distribution?
    Thanks!
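For reference, a hedged sketch of how the property change is typically applied in HSQLDB 1.8 (the file path is hypothetical): connect with any JDBC client, such as the bundled DatabaseManager, using URL jdbc:hsqldb:file:/home/user/diwdb (user SA, empty password), then run:

    -- Raise the .data file ceiling (scale 8 allows up to 16 GB in 1.8),
    -- then rewrite the database files so the new scale takes effect
    SET PROPERTY "hsqldb.cache_file_scale" 8;
    SHUTDOWN SCRIPT;

The 1.8 guide also describes the alternative of running SHUTDOWN SCRIPT first and then adding hsqldb.cache_file_scale=8 to diwdb.properties before restarting.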

  • Maximum file size for export into MP4?

    Hello,
I am not able to export a 2-hour HD video to a standard MP4 file. It seems that on reaching 100% the export algorithm gets into a loop. I waited for hours and still saw progress at exactly 100%, with the final file size on the hard disk at 0 bytes. I am using CS5 on Mac OS X. I had to split my timeline into 2 parts and export them separately (which is embarrassing). Is there something like a maximum file size for export? I guess that a 2-hour video would be about 25-35 GB.
    Thank you
    jiri

    You are right.
So I am running AP Pro 5.0.4 and Adobe Media Encoder 5.0.1.0 (64-bit) on Mac OS X 10.7.3. All applications are up to date.
MacBook Pro, Intel i5 2.53 GHz, 8 GB RAM, nVidia GT 330M 256 MB, 500 GB HDD.
Video is 1920x1080 (AVCHD) 25 fps in an .MTS container (major part of the timeline), 1280x720 30 fps in a .MOV container (2 mins), and still images 4000x3000 in .JPG.
No error message is generated during export - everything finishes without any problem... just the file created has a 0-byte size (as described above).
This is my largest video project (1 h 54 min); I don't have any problems with other projects.
I don't run any other special software; at the moment of export all the usual applications are closed so the MacBook's "power" can go to Media Encoder. No extra codecs installed; I use VLC Player or QuickTime.
Attached please find a screenshot of the export settings (AP Pro). While writing this post I tried to export only the first 4 minutes of the timeline, where every kind of media is used... and it was OK.
As a next step I will try to export (same settings) 1 h 30 min, as I still believe the problem comes with the length of the video exported.
    Let me know your opinion

  • File size for Show & Share

    Hi all.
We have an SnS system, version 5.2. When we try to upload a video larger than 2 GB we get a message that the file is too large to upload. But the documentation I have read says there is no limitation on file size, which is obviously not correct. So the question is: are there parameters that can be edited to change the maximum upload file size?

    Hello,
    This is a known limitation referenced in the Release notes for DMS 5.2.1.
    Release Notes for Cisco Digital Media Suite 5.2.x
    http://www.cisco.com/en/US/docs/video/digital_media_systems/5_x/5_2/dms/release/notes/dms52rn.html#wp180130
    CSCso78514 - DOCU1 - stay uploading menu forever if add a media size > 2G in Media Library
    Description:
    Using the local file option to add a media asset that is larger than 2G causes the upload
    menu to remain open indefinitely.
    Work Around:
    None. This is a browser limitation. We recommend that you upload a file that is smaller than 2G
    and that you use an external server for large files.
    On a Side note:
For a video content file at 2 GB or above, depending on how the video was encoded, this file size could represent anywhere from 1-4 hours of video. However, most organizations have implemented a best practice of segmenting videos into much shorter durations. Cisco itself recommends video durations of no more than 20 minutes, due to end users' attention spans.
    If this answers your question, Please take time to mark this
    discussion answered & rate the response.
    Thank You!
    T.

  • SAP Data File size considerably reduced after Unicode Conversion

    Hello Experts
I have just performed a CUUC (upgrade along with Unicode conversion) from R/3 4.7 to ECC 6.0 EHP5. The data size I had earlier was close to 463 GB (MSSQL MDF and LDF files). The data export to be converted to Unicode was 45 GB (10% of the actual data size; I feel this is normal after a heterogeneous system copy), but after the import the DATA file size is only 247 GB. Is this normal, or have I lost some data? For example, I checked tables like MSEG and the number of entries has reduced from 15,678,790 to 15,290,545.
Could you kindly let me know if there is a way to check, from a Basis perspective, that I have not lost any data? I have followed all the procedures per SAP standards.
    Waiting for your quick reply.
    Best Regards
    Pritish

    Hi Nicholas
That data is compressed during the new R3load procedure is understood, but why the number of table entries has reduced (and in some cases increased) is still a question to me. For example:
Table        Source       Target
STPO         415,725      412,150
STKO         126,710      126,141
PLAF          74,671       78,336
MDKP         193,487      192,747
MDPB          55,329       63,557
    Any suggestions or ideas ?
    Best Regards
    Pritish
