How to group measurement data into different groups using TDMS
Hi, friends!
I have designed a LabVIEW program for measuring the power-current characteristics of a laser diode. I want to save the current and power arrays measured at different temperatures in different groups. I tried to implement this but was not successful: the current and power measurements at different temperatures are saved only in the first group, and the rest remain empty! Please help me save the measurements in different temperature groups.
I need this urgently!
Thanks in advance!
Kumar
Perhaps a snippet of the code you are using would help. When you write to the TDMS file with the low level functions, you just need to specify the separate group names. Have you read over the NI TDMS File Format, Introduction to LabVIEW TDM Streaming VIs, and Writing TDM and TDMS files articles? Have you tried any examples out of the Example Finder?
As a basic example:
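Since LabVIEW code is graphical and can't be pasted as text, here is the grouping idea sketched in Python instead (group and channel names are illustrative): each temperature gets its own group name rather than reusing the first one.

```python
def group_name(temp_c):
    """One distinct TDMS group name per temperature (illustrative)."""
    return f"Temperature_{temp_c}C"

def save_measurements(tdms_file, temp_c, current, power):
    """tdms_file stands in for a TDMS file: {group: {channel: data}}."""
    group = tdms_file.setdefault(group_name(temp_c), {})
    group["Current"] = list(current)
    group["Power"] = list(power)

tdms = {}
save_measurements(tdms, 25, [0.10, 0.20], [1.0, 2.1])
save_measurements(tdms, 40, [0.10, 0.20], [0.9, 1.8])
print(sorted(tdms))  # two separate groups, not one
```

In LabVIEW terms: wire a different group-name string into the TDMS Write for each temperature iteration, not the same constant every time.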
Certified LabVIEW Developer
Similar Messages
-
I am running Lookout 5.0 and have recently purchased the LabVIEW Report Generation Toolkit for Microsoft Office to create reports from my Lookout logged data. Since I have never used LabVIEW, I am having problems. I tried following the tutorials, but they do not seem to be examples of what I want to do.
I log rainfall totals (one spreadsheet) in Lookout from 40 different sites in 5-minute increments. I copy these totals and paste them into another spreadsheet, which sums them up to give me hourly totals, and then paste those totals into a spreadsheet for distribution.
In LabVIEW I create a new report and use the distribution sheet as my template, but how do I complete the steps of loading the raw 5-minute data into LabVIEW, pasting it into the hourly total spreadsheet, and then transferring those totals into the distribution template?
I have been trying to figure this out for over a week, and I am getting nowhere.
Any response would be appreciated.
Thanks
Jason P
Jason Phillips
Lookout saves the files in .csv form, which can be opened in Excel. I did make some progress by using the "Append Table to Report" VI, which allowed me to put values into an array; those values were then entered into my template in my report VI.
Where I am stuck now is I want to be able to put values into my template from a .csv file, not from an array I have to manually put numbers in.
Once those values are in my template I want to pull summed values from the template and place them into a final excel file for printing.
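The 5-minute-to-hourly aggregation step itself is straightforward; a Python sketch of the logic (the column layout timestamp, site, rainfall is an assumption and may differ from the real .csv):

```python
import csv
import io
from collections import defaultdict

def hourly_totals(csv_text):
    """Sum 5-minute rainfall readings into hourly totals per site."""
    totals = defaultdict(float)
    for ts, site, value in csv.reader(io.StringIO(csv_text)):
        hour = ts[:13] + ":00"  # "YYYY-MM-DD HH:MM" -> hour bucket
        totals[(site, hour)] += float(value)
    return dict(totals)

sample = ("2024-01-01 10:00,SiteA,0.1\n"
          "2024-01-01 10:05,SiteA,0.2\n"
          "2024-01-01 11:00,SiteA,0.4\n")
totals = hourly_totals(sample)
print(totals)
```

The same grouping could then feed the distribution template instead of the manual copy-paste step.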
I have attached examples of the files I am working with to help you better understand what I am trying to do.
I hope that makes sense.
Jason Phillips
Attachments:
HourlyTotalsTemplate.xls 120 KB
eb_rain_gauge_ss.csv 23 KB
EastBankHourlyRainReport.xls 28 KB -
How to convert row data into columns without using pivot table?
Hello guys
I have a report that has several columns about sales call type and call counts
It looks like this:
Call Type    Call Counts
Missed 200
Handled 3000000
Rejected 40000
Dropped 50000
Now I want to create a report that looks like this:
Missed Counts   Handled Counts   Rejected Counts   Dropped Counts   Other Columns
200   3000000   40000   50000   Data
So that I can perform other calculations on the difference and comparison of handled call counts vs. the other call counts.
I know the pivot table view can make the report look that way, but it can't do further calculations on it.
So I need to create new solid columns that capture call counts based on call types.
How would I be able to do that in Answers? I don't have access to the RPD, so is it possible to do it solely in Answers? Or should I just ask ETL support about this?
Any pointers will be deeply appreciated!
Thank you!
Thanks MMA,
I followed your guidance and was able to create a few new columns for missed, handled and abandoned call counts. Then I created new columns for average missed counts, average handled counts and so forth.
Then I went to the pivot view and realized there is a small problem. Basically, the report still includes the column "call types", which has 3 different call types: missed, abandoned and handled. When I exclude this column from my report, the rest of the measures return wrong values. When I include this column in the report, it shows duplicate values across 3 rows. It looks like this:
Queue name   Call type   Call handled   Call missed   Call abandoned
A Miss 8 10 15
A Handled 8 10 15
A Abandoned 8 10 15
In the pivot table view, if I move call type to the column area, the resulting measures become 8×3, 10×3 and 15×3.
So is there a way to eliminate the duplicate rows, or to let the system know not to multiply by 3?
Or is this as far as Presentation Services can go? Or should I advise providing better data from the back end?
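The duplicate rows come from keeping "call type" in the row grain; conditional aggregation collapses the three rows into one solid column per type. A sketch with SQLite through Python (table and column names are illustrative, not the real report's):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE calls (call_type TEXT, call_count INTEGER)")
con.executemany("INSERT INTO calls VALUES (?, ?)",
                [("Missed", 200), ("Handled", 3000000),
                 ("Rejected", 40000), ("Dropped", 50000)])
# One CASE expression per call type turns rows into columns.
row = con.execute("""
    SELECT SUM(CASE WHEN call_type = 'Missed'   THEN call_count ELSE 0 END),
           SUM(CASE WHEN call_type = 'Handled'  THEN call_count ELSE 0 END),
           SUM(CASE WHEN call_type = 'Rejected' THEN call_count ELSE 0 END),
           SUM(CASE WHEN call_type = 'Dropped'  THEN call_count ELSE 0 END)
    FROM calls""").fetchone()
print(row)
```

In Answers, the same CASE logic can go into a column formula (CASE WHEN on the call-type column, aggregated with SUM), after which the "call types" column can be dropped from the report without multiplying the measures.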
Please let me know, thanks -
How to convert XMLTYPE data into CLOB without using getclobval()
Please tell me how to convert data which is stored in an XMLTYPE column of a table to a CLOB.
When I use getClobVal(), I get an error, so please tell me some other option besides getClobVal().
CREATE OR REPLACE PACKAGE BODY CONVERT_XML_TO_HTML AS
FUNCTION GENERATE_HTML(TABLE_NAME VARCHAR2, FILE_NAME VARCHAR2, STYLESHEET_QUERY VARCHAR2, WHERE_CLAUSE VARCHAR2, ORDERBY_CLAUSE VARCHAR2) RETURN CLOB IS
lHTMLOutput XMLType;
lXSL CLOB;
lXMLData XMLType;
FILEID UTL_FILE.FILE_TYPE;
HTML_RESULT CLOB;
SQL_QUERY VARCHAR2(300);
WHERE_QUERY VARCHAR2(200);
fileDirectory VARCHAR2(100);
slashPosition NUMBER;
actual_fileName VARCHAR2(100);
XML_HTML_REF_CUR_PT XML_HTML_REF_CUR;
BEGIN
IF WHERE_CLAUSE IS NOT NULL AND ORDERBY_CLAUSE IS NOT NULL THEN
SQL_QUERY := 'SELECT * FROM ' || TABLE_NAME || ' WHERE ' || WHERE_CLAUSE || ' ORDER BY ' || ORDERBY_CLAUSE;
ELSIF WHERE_CLAUSE IS NOT NULL AND ORDERBY_CLAUSE IS NULL THEN
SQL_QUERY := 'SELECT * FROM ' || TABLE_NAME || ' WHERE ' || WHERE_CLAUSE;
ELSIF WHERE_CLAUSE IS NULL AND ORDERBY_CLAUSE IS NOT NULL THEN
SQL_QUERY := 'SELECT * FROM ' || TABLE_NAME || ' ORDER BY ' || ORDERBY_CLAUSE;
ELSE
SQL_QUERY := 'SELECT * FROM ' || TABLE_NAME;
END IF;
OPEN XML_HTML_REF_CUR_PT FOR SQL_QUERY;
lXMLData := GENERATE_XML(XML_HTML_REF_CUR_PT);
--lXSL := GET_STYLESHEET(STYLESHEET_QUERY);
if(lXMLData is not null) then
dbms_output.put_line('lXMLData pass');
else
dbms_output.put_line('lXMLData fail');
end if;
lHTMLOutput := lXMLData.transform(XMLType(STYLESHEET_QUERY));
--INSERT INTO TEMP_CLOB_TAB2 VALUES(CLOB(lHTMLOutput));
if(lHTMLOutput is not null) then
dbms_output.put_line('lHTMLOutput pass');
else
dbms_output.put_line('lHTMLOutput fail');
end if;
HTML_RESULT := lHTMLOutput.getclobVal();
if(HTML_RESULT is not null) then
dbms_output.put_line('HTML_RESULT pass'||HTML_RESULT);
else
dbms_output.put_line('HTML_RESULT fail');
end if;
-- If the filename has been supplied ...
IF FILE_NAME IS NOT NULL THEN
-- locate the final '/' or '\' in the pathname ...
slashPosition := INSTR(FILE_NAME, '/', -1 );
IF slashPosition = 0 THEN
slashPosition := INSTR(FILE_NAME,'\', -1 );
END IF;
-- separate the filename from the directory name ...
fileDirectory := SUBSTR(FILE_NAME, 1,slashPosition - 1 );
actual_fileName := SUBSTR(FILE_NAME, slashPosition + 1 );
END IF;
DBMS_OUTPUT.PUT_LINE(fileDirectory||' ' ||actual_fileName);
FILEID := UTL_FILE.FOPEN(fileDirectory,actual_fileName, 'W');
UTL_FILE.PUT_LINE(FILEID, '<title> hi </title>');
UTL_FILE.PUT_LINE(FILEID, HTML_RESULT);
UTL_FILE.FCLOSE (FILEID);
DBMS_OUTPUT.PUT_LINE('CLOB SIZE'||DBMS_LOB.GETLENGTH(HTML_RESULT));
RETURN HTML_RESULT;
--RETURN lHTMLOutput;
EXCEPTION
WHEN OTHERS
-- Report the real error instead of swallowing it; this helps trace the ORA-06502.
THEN DBMS_OUTPUT.PUT_LINE('ERROR: ' || SQLERRM);
END GENERATE_HTML;
FUNCTION GENERATE_XML(XML_HTML_REF_CUR_PT XML_HTML_REF_CUR) RETURN XMLType IS
qryCtx DBMS_XMLGEN.ctxHandle;
result CLOB;
result1 xmltype;
BEGIN
qryCtx := DBMS_XMLGEN.newContext(XML_HTML_REF_CUR_PT);
result := DBMS_XMLGEN.getXML(qryCtx);
--dbms_output.put_line(result);
result1 := xmltype(result);
INSERT INTO temp_clob VALUES(result);
if(result1 is not null) then
dbms_output.put_line('pass');
else
dbms_output.put_line('fail');
end if;
DBMS_XMLGEN.closeContext(qryCtx); -- must run before RETURN, or it is unreachable
return result1;
END GENERATE_XML;
END CONVERT_XML_TO_HTML;
This is the code which I am using to generate the XML and subsequently to generate the HTML output from it using an XSL stylesheet.
The error is a "numeric or value error" (ORA-06502). -
How to display rows of data into different columns?
I'm new to SQL and currently this is what I'm trying to do:
Display multiple rows of data into different columns within the same row
I have a table like this:
CREATE TABLE TRIPLEG(
T# NUMBER(10) NOT NULL,
LEG# NUMBER(2) NOT NULL,
DEPARTURE VARCHAR(30) NOT NULL,
DESTINATION VARCHAR(30) NOT NULL,
CONSTRAINT TRIPLEG_PKEY PRIMARY KEY (T#, LEG#),
CONSTRAINT TRIPLEG_UNIQUE UNIQUE(T#, DEPARTURE, DESTINATION),
CONSTRAINT TRIPLEG_FKEY1 FOREIGN KEY (T#) REFERENCES TRIP(T#) );
INSERT INTO TRIPLEG VALUES( 1, 1, 'Sydney', 'Melbourne');
INSERT INTO TRIPLEG VALUES( 1, 2, 'Melbourne', 'Adelaide');
The result should be something like this:
> T# | ORIGIN | DESTINATION1 | DESTINATION2
> 1 | SYDNEY | MELBOURNE | ADELAIDE
The query should include `COUNT(T#) < 3`, since I only need to display trips with fewer than 3 legs. How can I achieve the results I want using relational views?
Thanks!!!
T# | LEG# | DEPARTURE | DESTINATION
1  | 1    | Sydney    | Melbourne
1  | 2    | Melbourne | Adelaide
1  | 3    | Adelaide  | India
1  | 4    | India     | Dubai
2  | 1    | India     | UAE
2  | 2    | UAE       | Germany
2  | 3    | Germany   | USA
On 11gR2, you may use this:
SELECT t#,
REGEXP_REPLACE (
LISTAGG (departure || '->' || destination, ' ')
WITHIN GROUP (ORDER BY t#, leg#),
'([^ ]+) \1+',
'\1')
FROM tripleg
where leg#<=3
GROUP BY t#;
Output:
1 Sydney->Melbourne->Adelaide->India
2 India->UAE->Germany->USA
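The shape the question asked for, with one solid column per destination, comes from conditional aggregation instead of string aggregation; a sketch using SQLite through Python (T# and LEG# are renamed tno and legno here, since # would need quoting):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tripleg (
    tno INTEGER, legno INTEGER, departure TEXT, destination TEXT)""")
con.executemany("INSERT INTO tripleg VALUES (?, ?, ?, ?)",
                [(1, 1, "Sydney", "Melbourne"),
                 (1, 2, "Melbourne", "Adelaide"),
                 (2, 1, "India", "UAE")])
# MAX(CASE ...) picks each leg's city into its own column;
# HAVING keeps only trips with fewer than 3 legs.
rows = con.execute("""
    SELECT tno,
           MAX(CASE WHEN legno = 1 THEN departure END)   AS origin,
           MAX(CASE WHEN legno = 1 THEN destination END) AS destination1,
           MAX(CASE WHEN legno = 2 THEN destination END) AS destination2
    FROM tripleg
    GROUP BY tno
    HAVING COUNT(*) < 3
    ORDER BY tno""").fetchall()
print(rows)
```

A trip with only one leg simply gets NULL in DESTINATION2, as relational views would show.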
Cheers,
Manik. -
Displaying a group of data in different columns
I have a problem with displaying a group of data in different columns. I want to display a group of data like this:
Column 1 --- Column2 ----- Column3
data1 data6 data11
data2 data7 data12
data3 data8 data13
data4 data9 data14
data5 data10 data15
That is, the column headers must be at the same height on the page, and the data must be in parallel columns.
My number of data rows is variable, depending on a query result. I want to start displaying my group in the first column and, when it is full (the number of records per column is fixed), it must switch to the next one.
In case there are more than 15 records, the 16th and following ones must be displayed on the next page, with the same format as explained before.
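The fill-down-then-switch logic itself is simple; a generic Python sketch of it (the actual layout would still be done in the report template):

```python
def columnize(items, rows_per_col):
    """Fill column 1 top to bottom, then switch to the next column."""
    return [items[i:i + rows_per_col]
            for i in range(0, len(items), rows_per_col)]

data = [f"data{i}" for i in range(1, 8)]   # 7 records, 5 rows per column
cols = columnize(data, 5)
print(cols)
```

Pagination (15 records per page here) would slice the record list into pages first, then columnize each page.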
Thank you very much.
Send me all files along with expected output at [email protected]
-
How to run the invoices in different groups within the batch in AP
Need to know how to run the invoices in different groups within the batch. This is very helpful when we deal with a lot of lines under one batch. For example, we issued a corporate card to all the employees via Bank of America. Every month they send the complete details of all the employees who ever swiped the corporate BOA card. According to Natco, all those lines should be loaded as one invoice so that a single payment can be made to BOA, which also makes their life easier. This standard program sometimes runs normally and sometimes runs like a TORTOISE, which is why the manual suggests using GROUP ID to split the invoice load.
So please tell me how we can run it.
Please give me the solution.
Thanks.
Can you give me some material or a document on that so that I can read it? Actually, I need to make a doc on it.
-
Displaying a group of data in different Pages
Hello
I will try to explain my problem briefly below.
I have a problem with displaying a group of data on different pages. I want to display a group of data like this:
page1 page2
data1Part1 data1Part2
Page3 Page4
data2Part1 data2Part2
Page5 Page6
data3Part1 data3Part2
page 7 Page 8
data4Part1 data4Part2
What I get is :
page1 page2
data1Part1 data2Part1
Page3 Page4
data3Part1 data4Part1
Page5 Page6
data2Part2 data3Part2
page 7 Page 8
data4Part2 data4Part2
I tested <?for-each-group@section:ROW?> and "different first page" etc. It doesn't work. I would appreciate your help. I can send you the output, template and XML doc if you can have a look.
Thanks
Send me all files along with expected output at [email protected]
-
Can we send data into different data targets from a single DataSource? How?
Hi
Can we send data into different data targets from a single DataSource? How?
Hi,
Create the transformation for each target and connect through DTP if you are in BI 7.0
If it is BW 3.5, create transfer rules, load to the InfoSource, and then use different update rules for the different targets and load using an InfoPackage.
If you are speaking about loading data from an R/3 DataSource to multiple data targets in BI,
then follow the steps below:
1) Create init InfoPackages and run them with different selections to the different DSOs (with the same selection it is not possible).
2) Then you will have different delta queues for the same DataSource in RSA7.
3) Your delta loads will run fine to the different data targets from the same DataSource in R/3.
Hope this helps
Regards,
Venkatesh -
How to store measurement data in a single database row
I have to store time-data series in a database and do some analysis using MATLAB later. Therefore the question might be more of a database question than about DIAdem itself. Nevertheless, I'm interested in whether anyone has best practices for this issue.
I have a database which holds lifecycle records for certain components of the same nature and different variants. Depending on the variant, I have test setups which record certain data. There is a common set of component properties and a varying number of environmental measurements to be included. The duration of data acquisition also varies across the variants.
Therefore, tables appear to be a non-optimal solution for storing the data, because the needed number of columns is unknown. Additionally, I cannot create individual tables for each sample of a variant; this would produce too many tables.
So there are different approaches I have thought of:
Saving the TDM and TDX files as text or as BLOBs
This makes it necessary to use intermediate files.
Saving the data as XML text
I don't know yet if I can process XML data in MATLAB.
Does anybody have advice on this problem?
Regards
Chris
Chris,
Sorry for the lateness in replying to your post.
I have done quite a bit of using a Database to store test results. (In my case this was Oracle DB, using a system called ASAM-ODS)
My 2 Cents:
Users needed three things: 1) to search for and find the tests; 2) to take the list of tests and process the data into a report/output summary; and 3) if the file size is large, to import the data quickly into the analysis tool to speed up processing.
1) Searching for test results. This all depends on what parameters are of value for searching. In practice this is a smaller list of values(usually under 20), but I have had great difficulty getting users to agree on what these parameters are. They tend to want to search for anything. The organization of the searching parameters has direct relationship to how you can search. The NI Datafinder does a nice job of searching for parameters, so long as the parameter data are stored in properties of Channel Groups or Channels. It does not search or index if the values are in channel values.
Another note: Given these are searchable parameters, it helps greatly if these parameters have a controlled entry, so that the parameters are consistent over all tests, and not dependent on free form entry by each operator. Searching becomes impossible if the operators enter dates/ names in wildly different formats.
2) A similar issue exists if you put values into a database. (I will use the database terms of table, column (parameter) and row (an instance of data that would be one test record).)
The SQL SELECT statement can do fast finds if you store the searchable parameters in columns of a table, with one row for each test record. The files I worked with have more than 2000 parameters; making a table that stores all of these, and can search all of these, requires a very large number of columns. I did not like this approach, as it has substantial maintenance time, and when changes are made, things get ugly quickly.
3) This is where having the file format be something that the analysis tool can quickly load is beneficial, especially if the data files are large. In DIAdem's case, it reads TDM and TDMS files very quickly into the Data Portal. It can also read MDF or HDF files, but these are hierarchical structures that require custom code to traverse the information and get it into the Data Portal for use in analysis/reporting. (It takes more time to read the data, but you have much more flexibility in the data structure than with the two-level TDM/TDMS format.)
My personal preferences
I would not want to put the test data itself into a table row. Each of the columns would be fixed, and the table would be very wide.
I personally like to put the test data into a file, like TDMS, MDF or HDF, and then have the database hold a reference to that file. The values in the database are just the parameters used for test searching, either in DataFinder or in SQL commands in the user interface.
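That "file plus reference row" layout can be sketched as a minimal schema (all names are illustrative; shown here with SQLite through Python):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Only the searchable parameters live in the table; the bulk
# measurement data stays in the referenced TDMS/MDF/HDF file.
con.execute("""CREATE TABLE test_record (
    component TEXT,
    variant   TEXT,
    test_date TEXT,
    data_file TEXT)""")
con.execute("INSERT INTO test_record VALUES (?, ?, ?, ?)",
            ("pump-7", "B", "2013-04-02", "/data/pump-7_runB.tdms"))
hits = con.execute(
    "SELECT data_file FROM test_record WHERE variant = ?", ("B",)).fetchall()
print(hits)
```

The varying channel count per variant then lives inside each data file, so the table width never depends on the variant.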
Hopefully these comments help your tasks some.
Respectfully,
Paul
tdmsmith.com -
Split data into different fields in TR
I have a flat file with spaces (multiple spaces between different fields) as the delimiter. The problem is, the file is coming from a 3rd party and they don't want to change the separator to a comma- or tab-delimited CSV file. I have to load the data into an ODS (BW 3.x).
Now I am thinking of loading line by line and then splitting the data into different objects in the transfer rules.
The Records looks like:
*009785499 ssss BC sssss 2988 ssss 244 sss 772 sss 200
*000000033 ssss AB ssss 0 ssss 0 ssss 0 ssss 0
*000004533 ssss EE ssss 8 ssss 3 ssss 2 ssss 4
s = space
Now I want data to split like:
Field1 = 009785499
Field2 = BC
Field3 = 2988
Field4 = 244
Field5 = 772
Field6 = 200
After the 1st line is loaded, go to the 2nd line and split the data as above, and so on. Could you help me with the code please?
Is it a good design for loading the data? Any other idea?
I appreciate your help.
Hi,
Not sure how efficient this is, but you can try an approach on the lines of this link /people/sap.user72/blog/2006/05/27/long-texts-in-sap-bw-modeling
Make your transfer structure in this format. Say the length of each line is 200 characters. Make the first field of the structure of length 200. That is, the length of Field1 in the Trans Struc will be 200.
The second field can be the length of Field2 as you need in your ODS, and similarly for Field3 to Field6. Load it as a CSV file. Since there are no commas, the entire line will enter into the first field of the Trans Structure. This can be broken up into individual fields in the Transfer Rules.
Now, in your Start Routine of transfer rules, write code like this (similar to the ex in the blog):
FIELD-SYMBOLS <fs> TYPE transfer-structure.
LOOP AT datapak ASSIGNING <fs>.
  " Split into as many target fields as the structure needs:
  SPLIT <fs>-field1 AT 'ssss' INTO <fs>-field1 <fs>-field2 <fs>-field3 <fs>-field4 <fs>-field5 <fs>-field6.
  MODIFY datapak FROM <fs>.
ENDLOOP.
Now you can assign Field1 of Trans Struc to Field1 of Comm Struc, Field2 of Trans Struc to Field2 of Comm Struc and so on.
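Outside ABAP, the same whitespace split is a one-liner; a Python illustration (the field layout is assumed from the sample records above):

```python
def split_record(line):
    """Split a fixed-layout record on runs of spaces.

    The leading '*' on the first field is stripped, matching the
    sample lines in the question.
    """
    return line.lstrip("*").split()

fields = split_record("*009785499    BC     2988    244   772   200")
print(fields)
```

Splitting on runs of spaces (rather than on a single space) is what makes the variable-width gaps harmless.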
Hope it helps!
Edited by: Suhas Karnik on Jun 17, 2008 10:28 PM -
How to put measures correctly into spreadsheet
Hi expert,
I loaded data into a planning model, and I can see the data in BW and BEx. But when I put 'ACCOUNT' into the row and 'TIME' into the column, I can't see 'Measures' in the spreadsheet. Even though I drag 'Measures' into the spreadsheet from the EPM pane, after refreshing I get a blank for 'Measures'; it is gone. Please tell me how to put measures correctly into the spreadsheet.
P.S. If you are creating the report by drag and drop on the EPM pane, you likewise need to add the Measures dimension to the desired axis.
Share the screen shot.
Shrikant -
Segregating table data into different buckets with an equal number of rows
Hi Guys,
I wanted to process table data in chunks, not all the rows at a time. How can we achieve this in Oracle?
Example :
I have one table, EMP, which has ten thousand (10,000) rows. Now these ten thousand rows are joined with other tables to process the data, and it takes time. So I wanted to pass one thousand rows at a time as input to the procedures, so that processing happens with only one thousand rows at a time. This table does not have any primary key. So is there any method in Oracle by which I can segregate the table's data into different buckets with an equal number of rows?
I have used DBMS_PARALLEL_EXECUTE, but it's taking a lot of time. Any help will be appreciated.
"I have one table EMP which has ten thousand (10,000) rows. Now these ten thousand rows are joined with other tables to process the data, and it takes time."
OK... so this is your actual problem, and the solution you are trying to come up with does not sound promising. So let's take a step back and rethink the strategy here. First, how about we see some code? Show us your code that you say is running slow, and we can take it from there. 10,000 is a very small number, and if that is causing a performance issue then you are in some big trouble.
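For the bucketing itself, the usual Oracle tools are NTILE() or the ROWID chunks that DBMS_PARALLEL_EXECUTE creates; the dealing logic, sketched generically in Python:

```python
def buckets(rows, n_buckets):
    """Deal rows into n_buckets slices of (nearly) equal size."""
    size, extra = divmod(len(rows), n_buckets)
    out, start = [], 0
    for i in range(n_buckets):
        end = start + size + (1 if i < extra else 0)  # spread the remainder
        out.append(rows[start:end])
        start = end
    return out

chunks = buckets(list(range(10000)), 10)
print([len(c) for c in chunks])
```

No primary key is needed for this style of chunking, which is why ROWID-based ranges work on keyless tables.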
It also wouldn't hurt if you could read this: Re: 3. How to improve the performance of my query? / My query is running slow. -
How to put data into a cache and distribute it to nodes using Oracle Coherence
Hi Friends,
I am writing some random-number data into a file. From that file I read the data, and I want to put it into a cache. How can I put the data into the cache and partition it across different nodes (machines) to calculate things like the standard deviation, variance, etc.? (Or: how can I implement Monte Carlo using Oracle Coherence?) If anyone knows, please suggest a flow.
Thank you.
regards
Chandra
Hi Robert,
I have some bulk data in an ArrayList (object) format that I want to put into the cache, but I am not able to. I am using the put method, cache.put(Object key, Object value), but it is not letting me put the data into the cache. Can you please help me? I am sending my code; please go through it and tell me where I made a mistake.
package lab3;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.cache.NearCache;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Scanner;
import javax.naming.Name;
public class BlockScoleData {
/*
* @param args
* s = the spot market price
* x = the exercise price of the option
* v = instantaneous standard deviation of s
* r = risk-free instantaneous rate of interest
* t = time to expiration of the option
* n = number of MC simulations
*/
private static String outputFile = "D:/cache1/sampledata2.txt";
private static String inputFile = "D:/cache1/sampledata2.txt";
NearCache cache;
List<Credit> creditList = new ArrayList<Credit>();
public void writeToFile(int noofsamples) {
Random rnd = new Random();
PrintWriter writer = null;
try {
writer = new PrintWriter(outputFile);
for (int i = 1; i <= noofsamples; i++) {
double s = rnd.nextInt(200) * rnd.nextDouble();
//double x = rnd.nextInt(250) * rnd.nextDouble();
int t = rnd.nextInt(5);
double v = rnd.nextDouble() ;
double r = rnd.nextDouble() / 10;
//int n = rnd.nextInt(90000);
writer.println(s + " " + t + " " + v + " " + r);
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} finally {
writer.close();
writer = null;
}
}
public List<Credit> readFromFile() {
Scanner scanner = null;
Credit credit = null;
// List<Credit> creditList = new ArrayList<Credit>();
try {
scanner = new Scanner(new File(inputFile));
while (scanner.hasNext()) {
credit = new Credit(scanner.nextDouble(), scanner.nextInt(),
scanner.nextDouble(), scanner.nextDouble());
creditList.add(credit);
System.out.println("read the list from file:" + creditList);
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} finally {
scanner.close();
credit = null;
scanner = null;
}
return creditList;
}
// public void putCache(String cachename,List<Credit> list){
// cache = CacheFactory.getCache ( "VirtualCache");
// List<Credit> rand = new ArrayList<Credit>();
public Object put(Object key, Object value){
cache = (NearCache)CacheFactory.getCache("mycache");
String cachename = cache.getCacheName();
List<Credit> cachelist = new ArrayList<Credit>();
// Object key;
//cachelist = (List<Credit>)cache.put(creditList,creditList);
cache.put(creditList, creditList); // note: the whole list is used as the key here
System.out.println("read to the cache list from file:" + cache.get(creditList));
return cachelist;
}
public static void main(String[] args) throws Exception {
NearCache cache = (NearCache)CacheFactory.getCache("mycache");
new BlockScoleData().writeToFile(20);
//new BlockScoleData().putCache("Name",);
System.out
.println("New file \"myfile.csv\" has been created to the current directory");
CacheFactory.ensureCluster();
new BlockScoleData().readFromFile();
System.out.println("data read from file successfully");
List<Credit> creditList = new ArrayList<Credit>();
new BlockScoleData().put(creditList, creditList);
System.out.println("read to the cache list from file:" + cache.get(creditList));
//cache=CacheFactory.getCache("mycache");
//mycacheput("Name",new BlockScoleData());
// System.out.println("name of cache is :" +mycache.getCacheName());
// System.out.println("value in cache is :" +mycache.get("Name"));
// System.out.println("cache services are :" +mycache.getCacheService());
}
}
regards
chandra -
How to upload data into transaction IA01
Hi All,
I have tried to upload the data into transaction IA01 using direct input object 0470.
But one structure, for the Service Package Overview tab, is missing. Can anyone suggest how to proceed? Is there any BAPI to upload the data? There are almost 150 fields that need to be uploaded into the transaction, which has different tabs, so it would be very difficult to handle with BDCs.
Thanks and Regards,
Shilpa