Some basic doubts

hi,
Is there any measurement for table redefinition, as there is for an index rebuild? For index rebuilds we use INDEX_STATS.
How do I find out whether an archived log is still needed?
What is the difference between an execute and a fetch?
What is the relationship between INITRANS, MAXTRANS, FREELIST, FREELIST GROUPS, PCTUSED and PCTFREE?
We are running Oracle Apps 11.5.10, which has many schemas. The APPS schema contains mostly views, and we don't know the origin schemas of those views. How do we find the origin schema of a view, and how do we refresh a schema - the origin schema or the APPS schema?

If you asked just one question at a time you would get more answers.
"How do I find out whether an archived log is still needed?"
All archived logs are needed for recovery. If they are backed up to tape, then you can delete them from disk.
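As a sketch of that cleanup (assuming the tape backups are taken with RMAN; the sbt device type is an assumption, adjust it to your setup), logs that already have a tape backup can be deleted from disk, and the dictionary shows what is still on disk:

```sql
-- At the RMAN prompt: delete archived logs with at least one backup on tape (sbt)
DELETE ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE sbt;

-- In SQL*Plus: archived logs still present on disk
SELECT name, first_time
FROM   v$archived_log
WHERE  deleted = 'NO';
```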
"What is the relationship between INITRANS, MAXTRANS, FREELIST, FREELIST GROUPS, PCTUSED and PCTFREE?"
If you take the time to read the Oracle documentation and use Google, you should be able to find answers to most of your questions.
"We are running Oracle Apps 11.5.10, which has many schemas. The APPS schema contains mostly views, and we don't know the origin schemas of those views. How do we find the origin schema of a view, and how do we refresh a schema - the origin schema or the APPS schema?"
What do you mean by "refresh the schema"? In Oracle Apps you should refresh the whole database by cloning, not refresh individual schemas.

Similar Messages

  • Storage rules for an editing rig. Some basics.

    How do you set up your editing machine in terms of disks for maximum performance and reliability? (SSD's are left out here.)
    This is a question that often arises, and all too often one sees that the initial settings are really suboptimal. These rules are intended to help you decide how to set up your disks to get the best response times. Of course, all disks in an editing machine must be 7200 RPM types or faster - no "green" disks at all.
    Rule 1: NEVER partition a disk. You may ask why. First of all, partitioning does not increase disk space; it just allocates the space differently. The major drawback, however, is that on a partitioned disk the OS must first access the partition table at the beginning of the disk for every access, requiring the heads to move to the beginning of the disk and then, once it has the partition info, move to the designated area and perform the requested action. This means much more wear and tear on the mechanics of the disk, lower speeds and more overhead for the OS, all reducing efficiency.
    Rule 2: Avoid using USB drives, since they are the slowest on the market. Do not be tricked by the alleged bandwidth in USB 2.0 advertisements, because it just is not true; remember too that the alleged bandwidth is shared by all USB devices, so if you have a USB mouse, keyboard, printer, card reader or whatever, they all share that bandwidth. Stick to SCSI, SATA or e-SATA disks. If needed, you can use FireWire 800 or even FireWire 400 disks, but they are really better suited for backups than for editing.
    Rule 3: Use at least 3 different physical disks on an editing machine, one for OS/programs, one for media and one for pagefile/scratch/renders. Even on a notebook with one internal drive it is easy to accomplish this by using a dual e-SATA to Express card connector. That gives you an additional two e-SATA connections for external disks.
    Rule 4: Spread disk access across as many disks as you have. If you have OS & programs on disk C:, set your pagefile on another disk. Also set your pagefile to a fixed size, preferably somewhere around 1.5 times your physical memory.
    Rule 5: Turn off index search and compression. Both will cause severe performance hits if you leave them on.
    Rule 6: If the fill rate on any of your SATA disks goes over 60-70% it is time to get a larger or an additional disk.
    Rule 7: Perform regular defrags on all of your disks. For instance, you can schedule this daily during your lunch break.
    Rule 8: Keep your disks cool by using adequate airflow by means of additional fans if needed. You can use SMART to monitor disk temperatures, which should be under 35 degrees C at all times and normally hover around 20-24 C, at least in a properly cooled system.
    Rule 9: If you want RAID, the cheapest way is to use the on-board ICH-R or Marvell chip, but that places a relatively high burden on the CPU. The best way is a hardware controller card, preferably based on the IOP348 chip; Areca ARC and Adaptec come to mind. 3Ware uses its own chipset and, though not bad, it is not in the same league as the other two. Promise and the like in the budget range are no good and a complete waste of money. Expect to spend around $800 or more for a good controller with 12 internal connectors and 4 e-SATA connectors. Important to consider in a purchasing decision is whether the on-board cache memory can be expanded from the regular 256/512 MB to 2 or even 4 GB. Be aware that the 2 GB cache can be relatively cheap while the 4 GB version is extremely costly ($30 versus $300). For safety reasons it is advisable to include a battery backup module (BBM).
    Rule 10: If you can easily replace the data in case of disk failure (like rendered files), go ahead and use raid0, but if you want any protection against data loss, use raid 3/5/6/10/30/50. For further protection you can use hot spares, diminishing downtime and performance degradation.
    In general when you get a new disk, pay close attention to any rattling noise, do perform regular disk checks, and in case of doubt about reliability, exchange the disk under guarantee. Often a new disk will fail in the first three months. If they survive that period, most of the disks will survive for the next couple of years. If you use a lot of internal disks like I do (17), set staggered spin-up to around 1 second to lessen the burden on the PSU and improve stability.
    Hope this helps to answer some basic questions. If not, let me know. Further enhancements and suggestions are welcome.

    ...well, it is a northern German thing - they often call us "Fischköpfe" ("fish heads") because we love to eat fish here in Hamburg!
    I have summarized the storage configuration I am thinking of:
    RAID Type / Objective / System requirements / RAID level:
    - Offline Storage: store a whole video project (1 h of 4K material requires about 128 GB). Needs to be highly reliable (redundancy is a must); doesn't need to be extremely fast; discs can be cheap because they carry only a light load (just upload and download to the video RAID). RAID level: 10.
    - Video RAID: store the material for a day's work. Fast and reliable. RAID level: 10.
    - Installation RAID: just to install Windows XP with CS4 Master Collection. Redundant, but speed isn't critical here. RAID level: 1.
    - Working RAID: for pagefile/scratch/renders. As fast as possible; disc failure isn't a big problem. RAID level: 0.
    In order to realize this, I am thinking of the following configuration:
    RAID Type           Discs   Type       GB/disc   Tot. storage [GB]   Usable storage [GB]   Cost [€]
    Offline Storage     8       SATA       1500      6000                4800                  900
    Video RAID          6       SCSI/SAS   300       900                 720                   2100
    Installation RAID   2       SCSI/SAS   36        36                  30                    200
    Working RAID        4       SCSI/SAS   147       580                 470                   1000
    Here are my assumptions and constraints:
    I only have 6 bays for the Installation and Working RAIDs;
    for the Video RAID I would also like to reuse an enclosure, which has just 6 bays;
    I would need to buy a NAS enclosure - here I am open-minded and just assumed 8 bays;
    the usable storage I estimated as 80% of the total storage;
    discs that are used heavily should be SCSI or SAS - I am thinking of the Cheetah 15K.
    Looking at the associated cost, I easily hit €4000 just for discs. OK, I can reuse some discs and enclosures that I have here - but since I also need to purchase the NAS enclosure (with 8 bays), which will cost an additional €1000, I will use 25% of my foreseen budget on storage.

  • Basic Doubts

    Hello Experts,
    Please can one of you clarify some of my basic doubts:
    1. Regarding ODS activation failure in a process chain: I know how to correct it if ODS activation fails, but I want to know what the reason for the failure could be and what happens technically in the background.
    2. Regarding attribute change run failure in a process chain: what could be the reason (technically)?
    3. What is the purpose of tRFC, and what do we have to check in SM58?
    4. What is the difference between the IDoc transfer method and the IDocs used to transfer data in the PSA transfer method? (For example, even if I am using the PSA transfer method, IDocs are used to transfer the data packets.)
    5. Sometimes I get errors like "Non-updated IDocs in source system" and error messages from the source system/Business Warehouse. What could be the reason? (I get these errors frequently, and if I use the repeat option the load is corrected and completes successfully.)
    6. How many DataSources can be assigned to one InfoSource (is there any limit)?
    I request your advice on the above doubts, and I will assign points.
    Thanks in advance
    Regards
    Kumar

    Hi,
    1. During month-end procedures and at weekends the number of records is higher than on weekdays, so the defined data packages run for a long time and we get a timeout error; the data packages then have to be activated manually.
    2. If you are not using the attribute change run, your master data (texts, attributes, hierarchies) won't get activated when you run a process chain. So once you load master data it is mandatory to activate it, and for that we need the attribute change run.
    3. tRFC calls which transfer IDocs use the function module IDOC_INBOUND_ASYNCHRONOUS at reception. If an IDoc in the sending system has been passed to tRFC (IDoc status "03") but has not yet arrived in the receiving system, this means that the tRFC call has not yet been executed.
    In the standard SAP system, if tRFC errors occur, a background job is generated to re-establish the connection. In certain circumstances this can result in a large number of background jobs being started that completely block background processing. To restart tRFC after errors, use program RSARFCEX; the related setting is in transaction SM59:
    choose Destination -> tRFC Options and select the option "Suppress Background Job if Comm. Error".
    4. PSA:
    The data is sent directly from the source system to BW and stored in the PSA (Persistent Staging Area); from there it can be updated automatically or manually to the corresponding InfoProviders. The transfer here uses tRFC (transactional RFC).
    IDoc:
    The data is packed into IDocs by the source system and sent to the Business Information Warehouse. In BW the data is saved persistently and intransparently in the IDoc store. There is a prerequisite for using this method: the transfer structure must not have more than 1000 bytes in character format.
    6. One DataSource can be assigned to one InfoSource.

  • Redologs basic doubt

    Hi,
    version is 10g, no archive log mode.....
    I have one basic doubt; please bear with me. I have gone through the documents, but still have some doubts.
    Someone asked me this: when a long-running transaction is going on, with no commit and a lot of redo activity, is the changed information lost when the redo log files are overwritten?
    My answer is as below:
    There is a long-running transaction going on (inserting data into one table).
    The redo log files are 10 MB in size (2 redo log groups with 2 members each). While the insertion is going on, changes are entering the redo log files; still there is no commit.
    (a) one redo log file gets filled up
    (b) a log switch happens
    (c) the next redo log file gets filled up
    (see, there is still no commit)
    (d) the system switches back to the first redo log group and overwrites the data in that redo log file
    So, coming to my doubt: all the uncommitted data has already been written to the datafiles, and when we roll back, the data written in the datafiles is rolled back using the undo information in the undo files.
    Am I right?
    During the insertion all the new data is written to the datafiles, and the old data is written to the undo files for use when we roll back.
    Please correct me if my understanding is wrong.
    Edited by: oraDBA2 on Oct 23, 2008 12:53 PM

    Dear friend,
    Oracle's Redo Log
    Each Oracle database has a redo log. The redo log records all CHANGES made in datafiles, which makes it possible to replay SQL statements.
    Before Oracle changes data in a datafile, it writes those changes to the redo log.
    As Oracle rotates through its redo log groups, it will eventually overwrite a group it has already written to. Data that is being overwritten would of course be useless for a recovery scenario. To prevent that, a database can (and for production databases should) be run in archive log mode. Simply stated, in archive log mode Oracle makes sure that online redo log files are not overwritten unless they have been safely archived somewhere.
    THUS, to ANSWER YOUR QUESTION: if archive log mode is ON, your long-running uncommitted transaction is generating large volumes of ARCHIVE LOGS!
    ALSO:
    LGWR writes the redo log buffers to disk. The background process in charge of archiving redo logs is ARCn (if automatic archiving is enabled).
    To find out in which mode the instance runs, one can use "archive log list" from within SQL*Plus.
    Log buffer: all changes covered by redo are first written into the log buffer. The idea of first storing them in memory is to reduce disk I/O. Of course, when a transaction commits, the redo log buffer must be flushed to disk, because otherwise the recovery of that commit could not be guaranteed. It is LGWR (the log writer) that does the flushing.
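    The mode check mentioned above can also be done from the data dictionary; a minimal sketch:

    ```sql
    -- Equivalent of "archive log list": show the database's logging mode
    SELECT log_mode FROM v$database;  -- returns ARCHIVELOG or NOARCHIVELOG
    ```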
    Determining amount of generated redo log
    select n.name, t.value
    from v$mystat t
    join v$statname n on t.statistic# = n.statistic#
    where n.name = 'redo size';
    See also the package redo_diff
    RBA (Redo Byte Address)
    The RBA consists of three parts and is ten bytes long:
    * Log sequence number
    * Block number within this sequence
    * Offset within this block
    The location of each redo log entry is identified through an RBA. The RBAs are important for dirty db blocks in the buffer cache.
    Determining optimal redo log size
    The optimal size of redo log files can be queried with
    select optimal_logfile_size from v$instance_recovery;
    Good luck!

  • Very basic doubt regarding transporting abap developments

    Hi All,
    It may be a very basic doubt, but I need an answer.
    Suppose we develop some objects in development and transport them to quality; some errors occur, and to correct the errors we create some more requests and port them to quality as well.
    The number of requests is, say, more than 15, and we have lost track of the sequence in which the requests must be ported to production.
    How can we overcome this problem?
    Can we create a new transport request by right-clicking the package (of course, all the objects are in only one package)? Would this serve the purpose? I mean, if we transport this single request to quality and then to production, will everything get transported?
    Please give your valuable inputs..
    Thanks and Regards
    KK

    Hi All,
    I really appreciate all your quick replies.
    My question is: if we create a new request from the package in SE80, can we leave out the old requests?
    That is, if I transport a request, say req 2 (a domain), to solve the dependency error of a previous request, say req 1 (a data element), and then transport req 1 again after req 2, this solves the dependency problem. When dependency issues like this exist and we lose track of the sequence of requests to be transported, what would be the ideal solution?
    So I mean to ask: if I create a new request for the package in SE80, will all the developments be included in that request, or do we need to follow the original sequence?
    Thanks and regards
    KK

  • Oracle Apps - 9iAS : Basic doubts

    Hello,
    I am new to Oracle Apps and 9iAS.
    I have a few basic doubts regarding the integration of the two.
    Can anybody please clarify...
    The middle tier of Oracle Apps is the 9iAS application server.
    I read that the Oracle9iAS components are:
    - J2EE and Internet Applications (sub-components are Oracle HTTP Server, OC4J, Web services etc)
    - Portals
    - Wireless
    - Web cache
    - Business Intelligence
    - E-Business Integration
    I read that the middle tier of Oracle Apps has the following servers:
    - Web server
    - Forms server
    - Concurrent Processing server
    - Reports server
    - Discoverer server (optional)
    - Admin server
    So, are all the components of 9iAS installed with Oracle Apps?
    If only some, then which ones?
    Where do the Concurrent Processing server, Reports server, etc. come from, as these are not components of 9iAS?
    Are they specific to Oracle Apps only?
    As core database administration knowledge is required for Apps administration (for managing the database tier), isn't 9iAS knowledge required for managing the middle tier?
    If yes, then to what level?
    Is knowledge of Oracle HTTP Server, Apache, Web Cache, OC4J, etc. required?
    Please suggest some links/documents related to all of this.
    I found a couple on Metalink & OTN, but not very useful!
    Thanks

    9iAS consists of Apache and an Oracle database on the middle tier.
    The other components come from the Developer Suite:
    http://otn.oracle.com/software/products/forms/index.html

  • Missing some basic functionality

    I seem to be missing some basic functionality, like switching to tty1-n using Ctrl+Alt+F1-n.  I can't begin to guess what this is related to, as after several installs I've never experienced that to be missing.  Can anyone point me in the right directions?  Thanks.

    evr wrote: I was recently having problems with commands like that because I was using "thinkpad" as the input.xkb.layout value in /etc/hal/fdi/policy/10-keymap.fdi. Changing it to "us" helped me; perhaps that's the issue?
    Hm. I don't actually have that file in the policy directory.

  • Some basic questions on File Adapter

    Hello all,
    I have some basic questions on XI and File Adapter and hope you can help me. Any answer is appreciated.
    1. Can I use the NFS transport protocol to poll a file from a machine in the network other than the XI host, or do I have to use FTP instead?
    2. If I understand it correctly, when using the FTP file adapter, XI has the role of an FTP client: I have to run an FTP server on my remote machine, and XI connects to the FTP server and polls the file.
    Can it also be configured the other way round? The scenario I am thinking of would be: an FTP client installed on the remote machine, which connects to an FTP server (XI) and uploads a file. So XI would act as the FTP server.
    I know this works if I install an FTP server on the computer my XI runs on and use the NFS file adapter to observe the folder, but I want to know whether I need a second, independent FTP server for this.
    3. And last but not least: when do I need active FTP mode instead of passive?
    Thanks a lot for your answers!
    Ilona

    > Hello all,
    > I have some basic questions on XI and File Adapter
    > and hope you can help me. Any answer is appreciated.
    >
    >
    > 1. Can I use NFS transport protocol to poll a file
    > from a machine in the network, which is not the XI?
    <b>yes</b>
    > Or do I have to use FTP instead?
    >
    <b>you can also use FTP</b>
    > 2. If I understand it correctly - when using the
    > FTP-File Adapter, XI has the role of a ftp client. I
    > have to run a ftp server on my distant machine. XI
    > connects to FTP-Server and polls the file.
    > Can it also be configured the other way round? The
    > scenario I think of would be: FTP client installed on
    > distant machine, which connects to FTP-Server(XI) and
    > loads up a file. So XI would act as FTP Server.
    > I know this works, if I install a ftp Server on the
    > computer my XI runs on, and use the NFS-File Adapter
    > to observe the folder. But I want to know, if I need
    > a second, independant ftp server for this.
    >
    <b>XI cannot act as an FTP server; it is always the client. When XI reads, it is the file sender adapter; when XI writes, it is the file receiver adapter.</b>
    > 3. And last but not least: When do I need the active
    > ftp mode instead of passive?
    >
    <b>It depends on your firewall configuration. Active mode is the best and the fastest, but it is not always available.</b>
    > Thanx a lot for your answers!
    > Ilona

  • Some BOne Doubts

    Hi All,
    We are planning some scenarios of the form
    SAP R/3 <-> SAP XI <-> SAP Business One
    The scenarios are:
    1. Whenever a PO is created, it should post some data to R/3 via XI.
    2. Goods issue (R/3 to BOne)
    3. Invoice (R/3 to BOne)
    For these I need some basic clarifications:
    1. How can BOne send data to or receive data from SAP XI or any other system without using the B1 Integrator?
    2. Where will we be updating the info? Are we retrieving data directly from the BOne database?
    3. What structures does BOne use for PO, invoice, delivery, etc.? In R/3 we have IDoc structures; similarly, do we have any structures in BOne, and where can we get them?
    Any help will be appreciated.
    Thanks & regards,
    Chemmanz

    This is a question regarding B1 integration. Please post it in this thread:
    SAP Business One Integration Technology

  • Oracle Business INtelligence -installation, basic doubts

    Can I ask basic doubts about Oracle BI installation, and other issues that may arise, in this forum?

    If it's Enterprise Edition, you can post here:
    Business Intelligence Suite Enterprise Edition

  • C2-03 Lacking Some basic options

    I have just bought a C2-03 and am disappointed with some basic options; kindly resolve them in the next software update:
    1- No option to type a message without opening the slide keypad - please add an on-screen keypad.
    2- No option to remove the keypad lock while a call is in progress, and no option to set the keypad lock time.
    3- No option to mark messages for deletion or moving - please add a mark option.
    4- No way to decrease the brightness of the phone.
    5- No option to set different ringtones for the two SIM cards.
    Please make these options available through a new software update; these weaknesses aside, the phone is excellent.
    Regards,
    Jagdeep Bhatt
    India

    Hi,
    I think this is same with the link below.
    http://discussions.europe.nokia.com/t5/Cseries/Nokia-C2-03-Lacking-Some-basic-options/m-p/1132449/hi...
    Br
    Mahayv

  • Some basic queries on OAF

    Hi All,
    I have some basic queries in OAF:
    1. What is the procedure to delete extensions?
    2. After extending a VO, why do we have to upload the .jpx file, and what happens if we don't?
    3. Can we use an EO without altering the table to add the WHO columns?
    4. Why do we have to develop OAF pages in the webui folder and VOs in the server folder only? Is there any specific reason?
    5. Are there any other ways to call methods in the AM, apart from using am.invoke("<method name>") from the CO?
    Please give me the answers to these queries.
    Thanks in advance,
    Srinivas

    1. What is the procedure to delete extensions?
    Go to the "Functional Administrator" responsibility. Under the Personalization tab, click Import/Export, search for your document and delete the customization.
    2. After extending a VO, why do we have to upload the .jpx file, and what happens if we don't?
    You need to upload the .jpx file because it contains the substitution tag which substitutes the old VO with the new VO; at runtime the framework checks whether a substitution is available and, if so, substitutes the old VO with the new one.
    3. Can we use an EO without altering the table to add the WHO columns?
    I think not, because when you perform DML operations on the EO the framework tries to update the WHO columns, and if the WHO columns are not present you will get an error message.
    4. Why do we have to develop OAF pages in the webui folder and VOs in the server folder only? Is there any specific reason?
    There is no specific reason; we can create our PG files in the server folder as well and it would work fine. This is just a standard given by Oracle.
    5. Are there any other ways to call methods in the AM, apart from using am.invoke("<method name>") from the CO?
    You should only use am.invoke.
    -- Arvind

  • I'm new to using dbms_scheduler. I'm trying out some basic stuff to see how it works

    I have not used the dbms_scheduler package before and am trying out some basic functionality to see and understand how it works. The code below is what I have written:
    BEGIN
    DBMS_SCHEDULER.create_job(
    job_name => 'Test_Job3',
    job_type =>  'PLSQL_BLOCK',
    job_action => 'BEGIN pr; END;',
    start_date => systimestamp,
    repeat_interval => 'freq=secondly;bysecond=4;',
    end_date => null,
    enabled => true,
    comments => 'This job is test for dbms_scheduler'
    END;
    create procedure pr
    is
    begin
    DBMS_OUTPUT.PUT_LINE('Inside the pr procedure called using scheduler');
    end;
    According to my understanding it should print the line inside the procedure 'pr' every 4 seconds, but I don't see any output being shown. Can someone help me understand what exactly is wrong with my code? I'm using Toad for Oracle.

    One more question: I'm trying to add one more piece of functionality. Let's say there is a job that needs to be executed every month at a particular time. I schedule it, but I want this job to be executed 'n' number of times.
    For example, a procedure is called by the scheduler. Since it processes a lot of records, I'm breaking the work down into chunks of data, say 5 chunks. Now, when it is scheduled each month at a particular time, it should ideally execute 5 times in order to complete the job for that month. How can this be achieved? I thought of using max_runs, but that might end the job and never repeat it again.
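    For what it's worth, a few things stand out in the first post's code: the CREATE_JOB call is missing its closing ");"; DBMS_OUTPUT from a scheduler job is written in the job's own background session, so it never appears in your Toad session; and 'freq=secondly;bysecond=4' fires once per minute (at second 4), whereas 'freq=secondly;interval=4' runs every 4 seconds. A corrected sketch that logs to a table instead (JOB_LOG is a made-up table name):

    ```sql
    -- Hypothetical log table, so the job's work is visible from any session
    CREATE TABLE job_log (msg VARCHAR2(100), logged_at TIMESTAMP);

    CREATE OR REPLACE PROCEDURE pr IS
    BEGIN
      INSERT INTO job_log
      VALUES ('Inside the pr procedure called using scheduler', SYSTIMESTAMP);
      COMMIT;
    END;
    /

    BEGIN
      DBMS_SCHEDULER.create_job(
        job_name        => 'TEST_JOB3',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN pr; END;',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'freq=secondly;interval=4',  -- every 4 seconds
        enabled         => TRUE,
        comments        => 'This job is a test for dbms_scheduler'
      );  -- this closing ");" was missing in the original block
    END;
    /
    ```

    Each run's status and any errors can then be checked in USER_SCHEDULER_JOB_RUN_DETAILS.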

  • Is the i7 13" Macbook Air capable of playing some basic games?

    I'm looking into possibly grabbing a 13" MacBook Air with the i7 1.7 GHz processor and was wondering if it's capable of playing some basic games that are out in the Mac App Store, such as Angry Birds, N.O.V.A. 2, or even Grand Theft Auto: San Andreas. Also, I tend to use Vuze and sometimes HandBrake; would the high-end 13" MacBook Air be capable of doing all this for me?

    Capable? Yes indeed it is. However, if the game or application is CPU- or GPU-intensive, you can expect longer waits for completion of tasks. It's not so noticeable in rudimentary games, but it really has an effect on detailed, fluid motion in games that rely on it.
    HandBrake tasks will just take longer to complete; it won't stop them.
    I use my MBAs for medium-intensity gaming without any complaints. But really, you should refer to the system requirements of each title, on a case-by-case basis.

  • Making some basic Java games

    Can anyone help me with how to make some basic Java games for my website? If you could, that would be helpful.
    thanks,
    Louie

    Well, first of all you program the framework, add graphics/sounds/data, then you put it on your homepage - there you are!
