Inter-module data transfer in FPGA

Hi everyone, I am working on a project in which I have a PXI system with an 8135 RT controller and 7854-R and 7951-R modules.
I have a PID loop VI running on the 7854-R FPGA card. The PID output is sent to the RT VI every 10 microseconds.
In the RT VI the values are passed on to the 7951-R VI, where a PWM signal is generated based on these values.
The problem is that if I pass the values every 10 microseconds I am not able to generate the PWM signal, but if I change the rate to 10 milliseconds it works perfectly and the PWM is generated properly. Can anyone suggest what I should do, and are there any benchmark loop rates I should be considering? Please refer to the attached VIs if required.
Attachments:
PWM generate.vi 50 KB
FPGA 1.vi 792 KB
typical problem RT.vi 1142 KB
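A rough back-of-the-envelope sketch (Python, purely illustrative; the per-transfer latencies are the figures quoted in the DMA benchmark discussion further down this page) of why a 10 microsecond update period through the RT host is hard to meet while 10 milliseconds is comfortable:
# Rough timing-budget sketch (Python, illustrative only - not LabVIEW).
# The per-transfer costs are assumptions taken from the DMA benchmark
# numbers quoted further down this page; measure them on your own system.
DMA_ROUND_TRIP_US = 30.0     # ~cost of a small DMA write plus read-back
FP_CONTROL_WRITE_US = 2.7    # ~cost of writing a few front-panel controls

def budget_used(update_period_us, transfer_cost_us):
    """Fraction of the RT loop period consumed by one transfer, in percent."""
    return 100.0 * transfer_cost_us / update_period_us

for period_us in (10.0, 10_000.0):            # 10 us vs 10 ms update period
    for name, cost in (("DMA FIFO", DMA_ROUND_TRIP_US),
                       ("FP controls", FP_CONTROL_WRITE_US)):
        print(f"{period_us:>9.1f} us period via {name:<11}: "
              f"transfer alone uses {budget_used(period_us, cost):6.1f}% of the loop")
# At a 10 us period even the cheapest transfer leaves little or no headroom,
# which matches the observation that the PWM only works at the 10 ms rate.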

Hi,
you can also use *LOOKUP to transfer data between applications; take a look at this:
BPC LOOKUP Script Logic
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e025fa8d-1c22-2e10-cd9f-c488c7eeadd4?quicklink=index&overridelayout=true
Hope it helps.

Similar Messages

  • Most efficient data transfer between RT and FPGA

    This post is related to THIS post about DMA overhead.
    I am currently investigating the most efficient way to transfer a set of variables to an FPGA target for our application.  We have been using DMA FIFOs for communication in both directions (to and from the FPGA), but I have recently been questioning whether this is the most efficient approach.
    Our application must communicate several parameters (around 120 different variables in total) to the FPGA.  Approximately 16 of these are critical, meaning that they must be sent every iteration of our RT control loop.  The others are also important but can be sent at a slightly slower rate without jeopardising the integrity of our system.  Until now we have sent these 16 critical parameters plus ONE non-critical parameter over a DMA to the FPGA card.  Each 32-bit value sent incorporates an ID which allows the FPGA to demultiplex it into the appropriate global variable on the FPGA (a small model of this packing scheme is sketched at the end of this post).  Thus over time we get a complete set of parameters: we run a 20 kHz control loop on the RT system, so a full parameter set is sent at approximately 200 Hz.  The DMA transfers are currently a relatively large factor in limiting the execution speed of our RT loop.  Of the 50 us available per time slot running at 20 kHz, approximately 12-20 us are taken by the DMA transfers to and from the FPGA target.  Our FPGA loop is running at 8 MHz.
    According to NI the most efficient way to transfer data to an FPGA target is via DMA.  While this may in general be true, I have found that for SMALL amounts of data, DMA is not terribly efficient in terms of speed.  Below is a screenshot of a benchmark program I have been using to test the efficiency of different types of transfer to the FPGA.  In the test I create a 32 MB data set (except for the FXP values, which are only present for comparison - they have no pertinence to this issue at the moment) which is sent to the FPGA over DMA in differing sized blocks (with the number of DMA writes times the array size being constant).  We thus move from a single really large DMA transfer to a multitude of extremely small transfers and monitor the time taken for each mode and data type.  The FPGA sends a response to the DMA transfers so that when reading the response DMA we can be sure that ALL of the data has actually arrived on the FPGA target and is not simply buffered by the system.
    We see that the minimum round-trip time for the DMA write and the subsequent DMA read for confirmation is approximately 30 us.  When sending less than 800 bytes, this time is essentially constant per packet.  Only when we start sending more than 800 bytes at a time do we see an increase in the time taken per packet.  A packet of 1 byte and a packet of 800 bytes take approximately the SAME time to transfer.  Our application sends 64 bytes of critical information to the FPGA target each time, meaning that we are clearly in the "less efficient" region of DMA transfers.
    If we compare the times taken when communicating over front-panel (FP) controls, we see that irrespective of how many controls we write at a time, the overall throughput is constant, with a timing of 2.7 us for 80 bytes.  For a small dedicated set of parameters, the use of front-panel controls seems to be significantly faster than sending per DMA.  Only once we need to send more than 800 bytes does the DMA start to become rapidly more efficient.
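    For readers less familiar with this pattern, here is a minimal model of the ID-plus-value multiplexing described above. It is Python rather than LabVIEW, and the field widths (an 8-bit ID in the upper bits, a 24-bit payload below it) are assumptions for illustration; the post does not state how the 32-bit element is split.
    # Minimal model of packing an (ID, value) pair into one 32-bit DMA element
    # and demultiplexing it into a table of "globals" on the receiving side.
    # Field widths (8-bit ID, 24-bit value) are assumptions for illustration.
    ID_BITS, VALUE_BITS = 8, 24
    VALUE_MASK = (1 << VALUE_BITS) - 1
    def pack(param_id, value):
        """Build one 32-bit element: ID in the top bits, value in the low bits."""
        return ((param_id & ((1 << ID_BITS) - 1)) << VALUE_BITS) | (value & VALUE_MASK)
    def unpack(element):
        """Recover (ID, value) from a 32-bit element, as the FPGA demux would."""
        return element >> VALUE_BITS, element & VALUE_MASK
    # RT side: 16 critical parameters plus one rotating non-critical one per loop.
    globals_on_fpga = {}
    for param_id, value in enumerate(range(100, 117)):   # 17 dummy parameter values
        element = pack(param_id, value)                  # what goes into the DMA
        rx_id, rx_value = unpack(element)                # FPGA side: demultiplex
        globals_on_fpga[rx_id] = rx_value                # ...into a "global"
    print(globals_on_fpga)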

    So, to continue:
    For small data sets the use of FP controls may be faster than DMAs.  OK.  But we're always told that each and every FP control takes up resources, so how much more expensive is the version with FP controls compared to the DMA?
    According to the resource usage guide for the card I'm using (HERE) the following is true:
    DMA FIFO (1023 elements, I32, no arbitration): 604 Flip-Flops, 733 LUTs, 1 Block RAM
    1x I32 FP control: 52 Flip-Flops, 32 LUTs, 0 Block RAM
    So the comparison would seem to yield the following result (for 16 elements):
    DMA : 604 Flip-Flops, 733 LUTs, 1 Block RAM
    FP : 832 Flip-Flops, 512 LUTs, 0 Block RAM (i.e. 16 x 52 and 16 x 32)
    We require more Flip-Flops, fewer LUTs and no Block RAM.  It's a swings-and-roundabouts scenario: depending on which resources are actually limited on the target, one version or the other may be preferred.
    However, upon thinking further I realised something else.  When we use the DMA, it is purely a communications channel.  Upon arrival, we unpack the values and store them in global variables in order to make them available within the FPGA program.  We also multiplex other values into the DMA, so we can't simply arrange the code to be fed directly from the DMA, which would negate the need for the globals at all.  The FP controls, however, ARE already persistent data storage, and assuming we pass the values along a wire into subVIs, we don't need additional globals in this scenario.  So the burning question is "How expensive are globals?".  The PDF linked above does not explicitly state the difference in cost between FP controls and globals, so I'll have to assume they're similar.  This of course massively changes the conclusion arrived at earlier.
    The comparison now becomes:
    DMA + 16 globals (assumed to cost about the same as FP controls): 604 + 832 = 1436 Flip-Flops, 733 + 512 = 1245 LUTs, 1 Block RAM
    FP : 832 Flip-Flops, 512 LUTs, 0 Block RAM
    This seems very surprising to me.  I'm suspicious of my own conclusion here.  Can someone with more knowledge of the resource requirement differences between globals and FP controls weigh in?  If this is really the case, we need to rethink our approach to communications between RT and FPGA, most likely employing a hybrid approach (a short worked version of this arithmetic follows below).
    Shane.
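    A short worked version of the arithmetic above (Python; the per-item figures are the ones quoted in the post, and treating a global as costing roughly the same as an FP control is the post's own assumption, not a documented number):
    # Resource comparison for sending 16 x I32 parameters to the FPGA.
    # Per-item costs are the figures quoted above; "global ~ FP control"
    # is the assumption carried over from the post, not a documented value.
    N = 16
    DMA = {"ff": 604, "lut": 733, "bram": 1}           # 1023-element I32 DMA FIFO
    FP_CONTROL = {"ff": 52, "lut": 32, "bram": 0}      # one I32 front-panel control
    GLOBAL = FP_CONTROL                                # assumption: similar cost
    def scale(cost, n):
        return {k: v * n for k, v in cost.items()}
    def add(a, b):
        return {k: a[k] + b[k] for k in a}
    fp_only = scale(FP_CONTROL, N)                     # 832 FF, 512 LUT, 0 BRAM
    dma_plus_globals = add(DMA, scale(GLOBAL, N))      # 1436 FF, 1245 LUT, 1 BRAM
    print("FP controls only:", fp_only)
    print("DMA + globals   :", dma_plus_globals)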

  • Data Transfer methods between FPGA and RT

    I have 10 values that I combined into a cluster that I want to send to the RT controller.  The problem is that if I just use the FPGA Read/Write Control, I only get 1/10th of the data.  I am assuming that this is due to the time taken to transfer from FPGA to RT.  So I looked into a DMA FIFO.  When I use this, I do capture the data, but I notice jumps/shifts in the data.  It's almost like the buffer fills up and then dumps it at the last second.  To reduce this, I went from 10 values in the cluster down to 2, and there were no shifts/jumps.  The problem is I need all 10 values.  So I'm wondering if I need to increase my DMA FIFO depth, or if there is another method I need to use, like shared variables?
    Thanks,
    guilio

    You asked: "The one thing I still don't understand is what you mean by this 'jump' in your data. Can you be a bit more specific about this?"
    What I mean by a jump: the elements coming from the FIFO in the RT controller display normally on my array indicator, and then you see the data jump. Here is an example. I am bringing out 3 elements, X Y Z, into an array, and I'll read, say, 9 elements at a time. So my array, after I transform it into a 2D array for saving, will look like this: [x y z; x y z; x y z]. At some points in time the array will jump and look like this: [x y z; z x y; z x y]. Then it will go back to how I expect it to look: [z x y; x y z; x y z]. Right now I am reading ~3000 elements. The elements remaining depend on what else is going on. If the indicators are deleted, then elements remaining never maxes out and I don't get jumps. If I put indicators in, then I get jumps because elements remaining maxes out. This leads me to believe that the FIFO is filling up and, since it can't release all the information fast enough, it just dumps it and fills up again. The weird thing is that it doesn't happen in a pattern (every 3000 elements). I tried putting them in separate loops; it still slows things down. I tried queueing/dequeueing; it still slows things down. I need to be able to read all elements of all 3 DMA transfers at once.
    You asked: "Maybe it will help us locate the problem. Otherwise, your DMA FIFO looks OK. Can you verify the number of times your data is written to the FIFO from the FPGA?"
    When I put a clock in to measure loop speed, it was showing 8-9 us. I can't have this at more than 10 us or I will miss data.
    You asked: "I would like to see a definite count of this. Then on the RT side, what does the 'Elements Remaining' output look like?"
    As before, elements remaining depends on what else is going on. If I just have the DMA FIFOs, then I can adjust the number of elements to read so all three don't max out, but as soon as I put something else in, it maxes out. Since this is a FIFO, we should be able to keep very close track of elements in and out.
    Let me know if you need more info.  I know about the FXP transfer; unfortunately, not all my FXP elements are the same data type.
    Thanks,
    guilio
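    One common way to keep interleaved channels aligned on the RT side is to only ever read a whole number of X/Y/Z records from the FIFO. The sketch below models that idea in Python (it is not LabVIEW code, and the plain list stands in for the DMA FIFO); whether it removes the jumps also depends on the FIFO being deep enough that it never overflows.
    # Model of reading an interleaved X/Y/Z stream so that every read stays
    # aligned to whole records. The list stands in for the DMA FIFO; in
    # LabVIEW you would use the "elements remaining" from the FIFO Read node.
    CHANNELS = 3                      # X, Y, Z interleaved on the FPGA side
    def read_aligned(fifo, elements_remaining):
        """Read only whole records, leaving any partial record in the FIFO."""
        n = (elements_remaining // CHANNELS) * CHANNELS
        data, rest = fifo[:n], fifo[n:]
        records = [tuple(data[i:i + CHANNELS]) for i in range(0, n, CHANNELS)]
        return records, rest
    fifo = [1, 2, 3, 4, 5, 6, 7, 8]   # two full records plus a partial one
    records, fifo = read_aligned(fifo, len(fifo))
    print(records)                    # [(1, 2, 3), (4, 5, 6)]
    print(fifo)                       # [7, 8] stays queued for the next read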

  • Can Data Transfer Workbench copy the Sales & Purchase modules' data?

    Hi,
    Just wondering, is there any way to copy the Sales & Purchase modules' data from an existing company to a new company? Copy Express can't do it, but can the Data Transfer Workbench?
    Thanks
    Regards,
    Danny

    Danny,
    How many fields from ORDR or RDR1 have to be exported depends on the business logic implemented and what was being used.  No one can really provide a standard set of 5 or 10 fields, as you know the client's business better than anyone on the forum.
    If the client used user-defined fields, then data from those fields should also be exported.
    The base fields from ORDR would be CardCode, DocDate, DocDueDate, Transpcode (shipping code), SlpCode, GroupNum (payment terms code), etc.
    Similarly on RDR1: ItemCode, Quantity, Price and any additional row-level information that may pertain to their business.
    Suda

  • The iPod module for data transfer

    Recently I put the iPod into disk-use mode so I could use it for data transfer. I went into My Computer, located the iPod service module and went to copy it onto my desktop, but it said the module is being used by another program, although it isn't at all. When I drag data such as my spreadsheets onto the module, it doesn't appear on the iPod module in iTunes. Why is that? What do I have to do to make sure the data has been received on the iPod? Thanks, Mr Roberts

    Hi,
    Here is how you can retrieve a data file from the presentation server (upload from the PC):
    " GUI_UPLOAD reads from the frontend PC and expects a STRING file name.
    DATA: i_file TYPE string VALUE '/usr/sap/tmp/file.txt'.
    " Internal table that receives one raw text line per row.
    DATA: BEGIN OF it_datatab OCCURS 0,
            row(500) TYPE c,
          END OF it_datatab.
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        filename        = i_file
        filetype        = 'ASC'
      TABLES
        data_tab        = it_datatab
      EXCEPTIONS
        file_open_error = 1
        OTHERS          = 2.
    IF sy-subrc <> 0.
      " Handle the upload error here (file not found, not readable, ...).
    ENDIF.
    Regards,
    Ameet

  • Upgrade issue 4.6C to 4.7 for DP90 - Data transfer module not allowed

    Hi All.
    We have an issue in an upgrade project from 4.6C to 4.7.
    The process we follow is contract, notification, service order, debit memo request.
    While running DP90 for the service order we get the error: Data transfer module is not allowed.
    After this the system only gives you the option to EXIT.
    In the log display we get following message :
    Data transfer module is not allowed
    Message no. V1247
    Diagnosis
    A wrong data transfer module was entered in Customizing for copying control. When you enter a sales order with reference to a quotation, you cannot copy header data from an invoice, for example. The module number is 306.
    System Response
    Processing is terminated.
    Procedure
    Check the copying control for your transaction in Customizing or contact your system administrator.
    Any idea how to deal with this?
    Regards,
    Amrish Purohit

    Hi Amrish,
    There is a standard SAP program, SDTXT1AID. Start it, and in the upper section you will find your incomplete parameters. Select the relevant parameters (or all of them) and press F8
    (not as a test run).
    I hope it will help you.
    regards,
    Zsuzsa

  • Generating Offer Letter & Data Transfer in Recruitment Module

    Dear Gurus,
    1) We are implementing the Recruitment module and I am quite new to it. Could anyone suggest the sequence of steps required to make sure that a standard offer letter is generated, and how to mail it to the applicant while performing the Offer Applicant Contract activity? Kindly suggest how to move forward to fulfil this requirement.
    Kindly note this is pure Recruitment; E-Recruitment is totally out of scope.
    2) How do you transfer applicant data to the Personnel Administration module when the applicant action Prepare for Hiring is run?
    Kindly suggest the sequence of steps to be followed to make sure the data transfer happens from Recruitment to Personnel Administration.
    Kindly also let me know the mandatory switches that have to be activated in table T77S0 to set up the integration between PA and RC.
    3) When you hire an external applicant whose data is already in PB10 and transfer the data to PA, will the applicant group change from external applicant to internal applicant the next time I search the applicant pool? If yes, how do I go about it?
    Kindly throw some light on this. Please note we would like to stick to the standard functionality of the SAP Recruitment module.
    If there is any challenge in fulfilling the above requirements, kindly point it out so I can address it with my client.
    Regards,
    Kiran

    Hi Ravi,
    You can do the initial data entry for the applicant through Tcode PB40 or PB10.  Once you have maintained the data, you can carry out applicant actions such as those listed below through Tcode PB40.
    Enter additional data
    Reject applicant
    Put applicant on hold
    Process applicant
    Offer applicant contract
    Applicant rejects offer
    Change of org. assignment
    Further application
    Invite applicant.
    It is not compulsory to carry out all these actions, but they are useful for keeping track of applicants.
    Then you have to run the applicant action Prepare for Hiring through PB40.  After the Prepare for Hiring action you can transfer the applicant data to Personnel Administration through Tcode PBA7.  Once the data is transferred, a personnel number is generated for the applicant.  Then you have to complete the activities through Tcode PBA8.
    Few tables for configuration of recruitment :
    T750D - for creating media for recruitment
    V_T750C - Recruitment instruments
    T750K - Applicant group
    V_T750F - Applicant range
    V_T751E - Applicant action type
    V_T588D - Infogroup for Applicant action
    Shrikant

  • External data Transfer - Loan Module (t.code - KCLJ)

    Hi Expert
    I have uploaded the loan source data through Tcode KCLJ, and it shows "External data transfer completed with return code: 4". There is no error, but no loan is created in the system and the tables are not updated. Is there any further step needed to post a loan transaction through EDT? I read the SAP documentation; it says the data is saved in the data pool. Please tell me the next step to update the loan data.
    Please  help
    Regards
    Gaurav Gupta

    Hi, how are you? My name is Santiago. I'm working with the KCLJ transaction and I'm trying to upload my file, but I can't. Could you fix your error? Do you have any documentation or an example of your file?
    thanks
    Santiago

  • Data transfer from Access-based program to AR module

    We are using an Access-based program for cash collection, invoicing, etc. An interface transfers the data to Oracle AR at the end of the day.
    But the interface is not functioning properly: sometimes data is not transferred, and sometimes wrong data is recorded. Can you explain how Oracle works with other programs, and how I can integrate AR, PO and Inventory with the Access program and ensure the data transfer is correct?
    The activities below are integrated with the Access program:
    AR - invoicing and cash collection are done by the Access program
    AP - certain purchase orders are to be created automatically from the Access program
    INV - stock issues relating to sales are done by Access from Oracle Inventory.

    Hi, can you please let me know which concurrent requests are used for transferring data from AR_PAYMENTS_INTERFACE_ALL into the Oracle base tables?
    I am using Oracle 11.5.10.

  • MyRIO memory, data transfer and clock rate

    Hi
    I am trying to do some computations on a previously obtained file, sampled at 100 Msps, using the myRIO module. I have mainly two doubts: one regarding data transfer and the other regarding clock rates.
    1. Currently I access my file (size 50 MB) from my development computer's hard drive in the FPGA through a DMA FIFO, taking one block of around 5500 points at a time. I have been running the VI in emulation mode for the time being. I was able to transfer through DMA from the host, but it is very slow (I can see each point being transferred!). The timer connected in the while loop in the FPGA says 2 ticks per loop, but the data transfer takes long. There could be two reasons for this: one, that the serial cable is the problem and the DMA happens fast but the update shown to the user is slower; or two, that the timer is not recording the time for the data transfer. Which one could be the reason?
    If I put the file on the myRIO module I will have to compile each and every time, but will it behave the same way as before with the dev PC (will the DMA transfer be faster)? And do I need to put the file on a USB stick? MAX says there is 293 MB of primary disk free space on the module, but I am not able to see this space at all. If I put my file in this memory, will the data transfer be faster? That is, can I use any static memory on the board (>50 MB) for my file, or can I use any data transfer method other than a FIFO? This forum thread (http://forums.ni.com/t5/Academic-Hardware-Products-ELVIS/myRIO-Compile-Error/td-p/2709721/highlight/... ) discusses this issue, but I would like to know the speed of the transfer too.
    2. The data in the file is sampled at 100 Msps. The filter blocks inside the FPGA ask for the FPGA clock rate and the sampling rate. I created a 200 MHz derived clock and specified it, gave the sampling rate as 100 Msps, but the filter gives zero results. Do these blocks work with derived clock rates, or is that a property of the SCTL alone?
    Thanks a lot
    Arya
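    As an aside, the host-side block streaming described in point 1 can be modelled in a few lines. This is Python rather than LabVIEW; the 2-byte sample width is an assumption (the post does not state the sample format), and write_block() is only a stand-in for the DMA FIFO Write on the real target.
    # Host-side model of streaming a large capture to the FPGA in fixed-size
    # blocks. The sample width (2 bytes) is an assumption, and write_block()
    # stands in for the DMA FIFO Write node on the real target.
    import io
    import struct
    BLOCK_POINTS = 5500
    BYTES_PER_SAMPLE = 2                      # assumption about the file format
    def write_block(samples):
        """Placeholder for the DMA FIFO Write; here it only counts samples."""
        write_block.total += len(samples)
    write_block.total = 0
    # Stand-in for the 50 MB capture file: a small in-memory buffer of samples.
    capture = io.BytesIO(struct.pack("<20000h", *range(-10000, 10000)))
    while True:
        raw = capture.read(BLOCK_POINTS * BYTES_PER_SAMPLE)
        if not raw:
            break
        samples = struct.unpack("<%dh" % (len(raw) // BYTES_PER_SAMPLE), raw)
        write_block(samples)                  # the last block may be shorter
    print(write_block.total, "samples streamed in blocks of", BLOCK_POINTS)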

    Hi Sam
    Thanks for the quick reply. I will keep the terminology in mind. I am trying to analyse the data file (each block of 5500 samples corresponds to a single frame of data) by running some intensive signal processing algorithms on each frame, then averaging the results and displaying them.
    I tried putting the file on the RT target, both on a USB stick and in the RT target's internal memory. I thought I would write the delay time for each loop, after the transfer has completed, to a text file on the system. I ran the code by making an exe for both the USB stick and RT-internal-memory methods, and by compiling using the FPGA emulator in the dev PC VI (a screenshot of the last method is attached; the same code is used for the other methods with minor modifications). To my surprise, all three of them gave 13 ms as the delay. I certainly expected the transfer from RT internal memory to be faster than from USB, and the one from the dev PC to be the slowest. I will work more on this and try to figure out why this is happening.
    When I transferred the data file (50 MB) into the RT flash memory, MAX showed a 50 MB decrease in free physical memory but only a 20 MB decrease in primary disk free space. Why is this so? Could you please tell me the difference between them? I did not find any useful online resources when I searched.
    Meanwhile, the other doubt still persists: is it possible to run filter blocks with derived clock rates? Can we specify clock rates like 200 MHz and sampling rates like 100 Msps in the filter configuration window? I tried, but obtained zero results.
    Thanks and regards
    Arya
    Attachments:
    Dev PC VI.PNG 33 KB
    FPGA VI.PNG 16 KB
    Delay text file.PNG 4 KB
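    To put the measured 13 ms per block into perspective, here is a quick arithmetic sketch (Python; the 50 MB file size, 5500-sample blocks and 13 ms per block are the figures reported above, while 4 bytes per sample is an assumption):
    # Back-of-the-envelope throughput check for the 13 ms-per-block observation.
    # 4 bytes per sample is an assumption; the other figures are from the posts.
    FILE_BYTES = 50 * 1024 * 1024
    BLOCK_SAMPLES = 5500
    BYTES_PER_SAMPLE = 4
    BLOCK_TIME_S = 13e-3
    block_bytes = BLOCK_SAMPLES * BYTES_PER_SAMPLE          # 22,000 bytes
    blocks = FILE_BYTES / block_bytes                        # ~2,380 blocks
    total_time_s = blocks * BLOCK_TIME_S                     # ~31 s for the file
    throughput_mb_s = block_bytes / BLOCK_TIME_S / 1e6       # ~1.7 MB/s
    print(f"{blocks:.0f} blocks, ~{total_time_s:.0f} s total, "
          f"~{throughput_mb_s:.1f} MB/s effective throughput")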

  • Saving portion of data acquired from FPGA and visualize it on HOST RT Waveform Graph

    Hi all,
    I'm using the FPGA to acquire data at 1 MHz from a custom module in an SCTL and place it in a target-scoped FIFO (FIFO1, block memory, 1023 elements).
    What I want to do is acquire a known waveform and visualize the acquired data, just to check whether the data is corrupted or the samples correctly represent the waveform.
    I'm reading data from FIFO1 and placing it in a MEMORY block in another SCTL so as not to affect the first SCTL.
    My intention is to create a one-shot read operation that places 1023 elements from FIFO1 into MEMORY (attached pic).
    After this I have another while loop that handles communication with the HOST to read the data from MEMORY.
    The data type is FXP, signed, 24-bit word length with a 24-bit integer part.
    I cannot visualize anything on the host waveform graph. Can you tell me why?
    I've attached FPGA vi and HOST vi.
    Thanks in advance.
    Attachments:
    1.png 60 KB
    TEST acquired waveform.vi 170 KB
    Host Testing Acquired Data.vi 112 KB

    Hi Mariano,
    I need your complete project in order to review your application. Anyway, you could try to run this example, which shows the best practice for achieving data integrity and ensuring there are no underflow/overflow events in the data transfer.
    In the RT VI you can try to use the Type Cast function in order to obtain DBL values from the FXP ones.
    Have a look at this white paper, which shows how to use floating point on the FPGA side.
    I hope the above information helps you solve your issue.
    Best Regards.
    CLA_CUP
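    Since the FXP type in question has a 24-bit word length and a 24-bit integer word length (no fractional bits), each sample is effectively a signed 24-bit integer. The sketch below is a host-side illustration in Python, not LabVIEW; it assumes the raw samples arrive right-justified in a larger integer word, which is an assumption about the transfer format.
    # Interpreting a signed FXP value with word length 24 and integer word
    # length 24 (no fractional bits): it is simply a signed 24-bit integer.
    # Assumes the 24 raw bits arrive right-justified in a larger word.
    WORD_LENGTH = 24
    def fxp24_to_float(raw):
        """Sign-extend the low 24 bits and return the value as a float (DBL)."""
        value = raw & ((1 << WORD_LENGTH) - 1)
        if value & (1 << (WORD_LENGTH - 1)):          # sign bit set?
            value -= 1 << WORD_LENGTH                 # two's-complement correction
        return float(value)                           # delta = 1, no scaling needed
    print(fxp24_to_float(0x000001))    #  1.0
    print(fxp24_to_float(0xFFFFFF))    # -1.0
    print(fxp24_to_float(0x800000))    # -8388608.0 (most negative value)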

  • Problems with Photoshop performance and data transfer speed on iMac

    Two months ago I started noticing slow performance in Photoshop (above all when using the clone stamp tool) on my 27" iMac (late 2012). I ran the Apple Hardware Test and found that 8 GB of the 32 GB of RAM were broken.
    I removed them, but the problem didn't disappear. I also noticed that data transfer speed (both copy and paste from/to the internal HD and from a CF card/external HD) was really slow.
    I tried many solutions suggested by Apple support; none of them worked. In the end I tried uninstalling and reinstalling Photoshop: no more problems!
    Ten days ago I received a new 8 GB RAM module and installed it... suddenly, the problem came back. I tried reinstalling Photoshop again, but this time the problem still persists!
    Has anyone had the same experience? All the other CC programs work well (LR, AE, Premiere...).

    Yes, it does!
    What seems very strange to me is how the data transfer speed could be affected!
    (Just to say, I've already tried resetting the SMC and PRAM, I've tried different accounts, and I've also reinstalled the OS; the next step would be formatting the disk and installing the OS from scratch.)

  • Error 33172 occurred at Read & Write data transfer between two or more PF2010 controller

    Hi, I need to do data transfer between two or more FP2010 controllers, e.g. FP2010(A) and FP2010(B).
    FP2010(A) needs to transfer its measurements (from its I/O module) to FP2010(B) for data analysis. These transfers should be synchronous between the two controllers to prevent data loss.
    With the VI used in the attachment, I encountered some problems:
    (1) Error 33172 occurred while publishing the data. Can I create and publish data under different item names?
    (2) How do I synchronize the read & write between controllers?
    All controllers communicate with each other directly, without a host computer to link them together.
    Is there any other method to do fast data transfer between controllers?

    Hi YongNei,
    You were successful in omitting enough information to make it very difficult to answer!
    Please post your example.
    Please tell us what version of LV-RT you are using.
    Please define what you consider "fast data transfer".
    Have you considered mapping the FP tags of FP2010(A) to FP2010(B) and vice versa?
    What exactly has to be synchronized?
    If you have something that is close to working, share that.
    Well, that is as far as I can go with the info you have provided. Depending on the details, what you are asking could be anything from trivial to impossible with the currently available technology. I just can't say.
    It would probably be a good idea to start over with a fresh question (sorry), because not many people are going to know what a "PF2010" is, and I cannot guarantee that I will be able to get back to you personally until next weekend.
    Trying to help you get an answer,
    Ben

  • Material returns in case of Inter company stock transfer

    Hello,
    In an inter-company stock transfer, when material received in the receiving company code gets rejected in the quality check, what is the process in SAP to return the material to the supplying company code?
    Regards
    Manish

    Hi Manish,
    Here's the process (generally):
    1. (At the receiving plant)
    EITHER
    1a) create a return delivery with reference to the material document with which the goods were received (this re-opens the order quantity in the PO) and post the goods issue (PGI, delivery type RLL),
    OR
    1b) create a return PO (with the returns indicator) and a return delivery (does not re-open the PO quantity) and post the goods issue (PGI, delivery type NCR).
    2. (At the supplying plant)
    Create a returns order in SD and a returns delivery (delivery type LR) and post the goods receipt (PGR).
    NOTE: The supplying plant must have a vendor master (maintained at company code level) assigned, and the receiving plant must have a customer master and the other shipping data (sales org., distribution channel, division) maintained.
    After the postings, the AP and AR accounts are updated to reflect the postings in each company code. Verify this in the accounting documents created as a result of the PGI and PGR.
    Let me know if this helped.
    Regards,
    -Mewan

  • Inter company stock transfer process

    Dear all,
    For example, company code A sends the material to company code B, and it is sold by company code B.
    As per the process given by SAP for inter-company stock transfer, we create a purchase order (PO type NB) in company code B. Based on this PO, company code A creates a delivery document, and based on this delivery document company code B does MIGO and MIRO.
    To execute this process, while creating the PO in company code B we need to assign a shipping plant from which company code A will send the material. To get this shipping tab in the PO, we need to assign the sending plant code to the vendor master (company code A is the vendor in company code B). As per SAP, only one plant code can be assigned to a vendor, but in our scenario material can be sent from several plants.
    So in this case, how can we assign those plants to one vendor so that the shipping tab is available in the PO?
    Do we need to create that many vendors in this scenario, or is there another solution provided by SAP to handle this kind of scenario?
    Regards,
    Abdul Jabbar

    Hi Abdul,
    You asked how we can assign those plants to one vendor so that the shipping tab will be available in the PO.
    In standard SAP it is not possible to add more than one plant to a single vendor master (XK01). The system allows you to add only one plant to the vendor master, under Purchasing data -> Extras -> Additional purchasing data.
    To get the shipping tab in the PO:
    1> the STO must be configured, and
    2> the shipping point determination must be configured correctly.
    In an inter-company STO, create a vendor master for each and every plant, and use that vendor whenever creating a PO from the receiving plant. Based on the vendor, the shipping point will be determined in the PO, because the receiving plant is created as a customer in the supplying plant's sales area.
    Regards,
    Gangadhar A.
