Essbase 9.3.3 - data export files
Hi
I know that when an export file reaches 2 GB, Essbase automatically starts a new file of the same name with _1 appended to it. I'm trying to find out whether there is a way to control where the split happens. At the moment, when I open the second file, it starts halfway through a data line, and the header row appears about 80 rows down into the file. This causes problems when I try to load the file into another cube using a load rule.
These are the settings I have at the top of my export file.
SET LOCKBLOCK HIGH;
SET UPDATECALC OFF;
SET AGGMISSG ON;
SET NOTICE DEFAULT;
SET MSG SUMMARY;
SET CACHE HIGH;
SET CALCPARALLEL 3;
SET EMPTYMEMBERSETS ON;
SET CREATEBLOCKONEQ OFF;
SET CREATENONMISSINGBLK OFF;
SET FRMLBOTTOMUP ON;
SET DATAEXPORTOPTIONS
{
DataExportLevel LEVEL0;
DataExportDynamicCalc OFF;
DataExportDecimal 6;
DataExportPrecision 16;
DataExportColFormat ON;
DataExportColHeader "Synapse Pipeline Stage";
DataExportDimHeader ON;
DataExportRelationalFile OFF;
DataExportOverwriteFile ON;
DataExportDryRun OFF;
};
Thanks in advance for any help.
Could you not break the export up into multiple exports to bring each file size below 2 GB, then load the multiple export files into your target database?
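Until the exports can be kept under 2 GB each, the broken pieces can at least be repaired before loading. A minimal shell sketch; the file names and the 1900M chunk size are illustrative assumptions, not Essbase behavior, and GNU split is assumed for the -C option:

```shell
#!/bin/sh
# Sketch: rejoin export pieces that were cut mid-line, then re-split
# on whole-line boundaries so each piece starts with a complete record.
rejoin_and_split() {
    prefix=$1; chunk=$2; shift 2
    cat "$@" > "${prefix}combined.txt"
    # -C packs as many complete lines as fit per chunk: no mid-line cuts
    split -C "$chunk" "${prefix}combined.txt" "${prefix}part_"
}
# typical call, leaving headroom under the 2 GB limit:
# rejoin_and_split exp_ 1900M export.txt export_1.txt
```

The stray header row still has to be handled by the load rule, but every piece now begins on a record boundary.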
Cheers
John
http://john-goodwin.blogspot.com/
Similar Messages
-
At the moment it shows Microsoft Word or Excel (97-2003); how can I change the file type name to Microsoft Word or Excel (97-2010)?
Is it also possible to change the order of the file types? I would like *.pdf as the default rather than *.rpt.
Hi,
The Excel export driver only goes up to 2003, but Excel 2010 works fine with it. Once Business Objects updates the driver for Excel 2010 sometime in the future, the description will change.
As for the order of the drivers, the description you see comes from inside the driver itself. The formats are listed alphabetically, and there is no real way to change their order.
You can remove a format by removing its driver from your machine, but you cannot reorganize them.
Thanks,
Brian -
Matching import and export files
Is there a way to pair up imported files with the exported .Dat files and .err error files? We are running the batch process (using upsShell.exe executed via a scheduled batch script) to import CSV files and export them to Essbase, and several of the .Dat files have failed. Nothing is recorded in the <username>.err log; the only indication of failure is the numerous .err files that accompany the .Dat export files in the Outbox.
Since we have loaded numerous files against the same location (which is the prefix for the .Dat and .err files), we have numerous .err files which have very similar names. For example:
LOCATION001.Err
LOCATION002.Err
LOCATION003.Err
Etc...
But there is no indication as to which of the imported data files these error files actually relate to! For example:
A~LOCATION~WLCAT~PERIOD1~RA.csv
B~LOCATION~WLCAT~PERIOD1~RA.csv
C~LOCATION~WLCAT~PERIOD1~RA.csv
Etc...
Is there a way to identify which .Err file relates to which .CSV file?
I wouldn't say that the .dat file is completely independent of the source file, since the .dat file essentially contains data mapped directly or indirectly from the source file and wouldn't be generated without it.
It is a shame that FDM does not generate some sort of useful batch processing log which details steps like this, for example:
2012:08:02 13:41 Import: A~LOCATION~WLCAT~PERIOD1~RA.csv
2012:08:02 13:42 Validate: Imported data
2012:08:02 13:43 Export: LOCATION001.Err
2012:08:02 13:43 Check: etc
This way, one would clearly be able to match problematic .Dat files with the original source file, and then look at fixing the source file if this was the root of the problem. I cannot see at present how one is supposed to do this when faced with many load files and error files.
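In the absence of such a log, file timestamps are about the only evidence left. A hedged shell sketch that pairs each .Err file with the source file whose modification time is closest; the directory layout is an assumption, and FDM itself writes no such correlation:

```shell
#!/bin/sh
# Sketch: pair each .Err file with the source file imported nearest in
# time, using filesystem modification times as a proxy for the batch log.
pair_by_mtime() {
    err_dir=$1; src_dir=$2
    for err in "$err_dir"/*.Err; do
        [ -e "$err" ] || continue
        err_t=$(stat -c %Y "$err" 2>/dev/null || stat -f %m "$err")
        best=""; best_d=999999999
        for src in "$src_dir"/*.csv; do
            src_t=$(stat -c %Y "$src" 2>/dev/null || stat -f %m "$src")
            d=$((err_t - src_t))
            if [ "$d" -lt 0 ]; then d=$((0 - d)); fi
            # keep the source file with the smallest time gap
            if [ "$d" -lt "$best_d" ]; then best_d=$d; best=$src; fi
        done
        echo "$err  <->  $best"
    done
}
```

This is only a heuristic: it assumes each import/export cycle touches its files close together in time.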
Is there anything else I might have missed here? -
Genralizing the data export script
Hi, I have a data export script that runs as a business rule. I can generalize the version, year, scenario and everything else, but the problem is that the export file being created has a fixed name: each time I run the rule, the export file name is the same. Is it possible to somehow generalize the export file name as well?
Hi, here is some Java I have used earlier; try to modify it. It saves the file with the current date and time. I also used sed; all of this runs on UNIX.
JAVA
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.DataInputStream;
import java.io.File;
import java.io.FileFilter;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileWriter;
import java.io.InputStreamReader;
import java.io.Writer;
import java.util.Calendar;
import java.util.StringTokenizer;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
/** @author KVC */
public class MigratorUtil {

    public static File getLatestFile(String dir) {
        File directory = new File(dir);
        File choice = null;
        if (directory.isDirectory()) {
            // keep regular files only
            File[] files = directory.listFiles(new FileFilter() {
                public boolean accept(File file) {
                    return file.isFile();
                }
            });
            long lastMod = Long.MIN_VALUE;
            for (int i = 0; i < files.length; i++) {
                File file = files[i];
                if (file.lastModified() > lastMod) {
                    choice = file;
                    lastMod = file.lastModified();
                }
            }
        } else {
            System.out.println(dir + " is not a directory");
        }
        return choice;
    }

    public static boolean processFile(File latestFile) throws FileNotFoundException, Exception {
        FileInputStream fileStream = new FileInputStream(latestFile);
        DataInputStream in = new DataInputStream(fileStream);
        BufferedReader br = new BufferedReader(new InputStreamReader(in));
        String strLine;
        int lineCount = 0;
        StringBuffer contents = new StringBuffer();
        while ((strLine = br.readLine()) != null) {
            if (lineCount == 0) { // first line: replace with header
                String header = System.getProperty("header");
                if (header == null) {
                    header = "HEADERHYPERION";
                }
                contents.append(header).append(getPreviousBusinessDate()).append(getDateFormat()).append("\n");
            } else {
                contents.append(strLine).append("\n");
            }
            lineCount++;
        }
        br.close();
        // footer
        if (lineCount != 0) { // only if the file had content
            String footer = System.getProperty("footer");
            if (footer == null) {
                footer = "TRAILER";
            }
            contents.append(footer).append(lineCount - 1); // lineCount - 1 excludes the header line
        }
        // write the file
        String fileName = latestFile.getAbsoluteFile().toString();
        String outputFile = fileName.substring(0, fileName.indexOf(".")) + ".out";
        System.out.println(" output file is ..." + outputFile);
        File output = new File(outputFile);
        Writer oWriter = new BufferedWriter(new FileWriter(output));
        try {
            oWriter.write(contents.toString());
            // oWriter.write(getProcessedLine(contents.toString()));
        } finally {
            oWriter.close();
        }
        return true;
    }

    public static String getDateFormat() {
        Calendar calendar = Calendar.getInstance();
        // Calendar.MONTH is zero-based, so this effectively prints the previous month's number
        int currentMonth = calendar.get(Calendar.MONTH);
        return calendar.get(Calendar.YEAR) + "-" + (currentMonth > 9 ? "" + currentMonth : "0" + currentMonth) + "-01";
    }

    public static String getPreviousBusinessDate() {
        Calendar calendar = Calendar.getInstance();
        int currentMonth = calendar.get(Calendar.MONTH);
        calendar.set(Calendar.MONTH, currentMonth - 1);
        int lastDate = calendar.getActualMaximum(Calendar.DATE);
        calendar.set(Calendar.DATE, lastDate);
        int lastDay = calendar.get(Calendar.DAY_OF_WEEK);
        if (lastDay == 1) {
            lastDate = lastDate - 2; // for Sunday
        } else if (lastDay == 7) {
            lastDate = lastDate - 1; // for Saturday
        }
        return calendar.get(Calendar.YEAR) + "-" + (currentMonth > 9 ? "" + currentMonth : "0" + currentMonth) + "-" + lastDate;
    }

    private static String getProcessedLine(String line) {
        String seperator = System.getProperty("inputseperator");
        String out_seperator = System.getProperty("outputseperator");
        if (seperator == null) {
            seperator = "!";
        }
        if (out_seperator == null) {
            out_seperator = "|";
        }
        StringTokenizer tokenizer = new StringTokenizer(line, seperator);
        StringBuffer descContent = new StringBuffer();
        StringBuffer content = new StringBuffer();
        while (tokenizer.hasMoreTokens()) {
            String element = tokenizer.nextToken();
            if (matchPattern(element)) {
                System.out.println("Criteria matched..." + element + ", so consume the next element");
                descContent.append(tokenizer.nextElement()).append(out_seperator);
            } else {
                content.append(element).append(out_seperator);
            }
        }
        content.append(descContent);
        String output = content.toString();
        return output.substring(0, output.length() - 1);
    }

    private static boolean matchPattern(String line) {
        String regex = "\\d{1,2}.\\d{1,2}.\\d{1,2}";
        Pattern pattern = Pattern.compile(regex);
        Matcher m = pattern.matcher(line);
        return m.matches();
    }

    public static void main(String[] a) {
        System.out.println(getPreviousBusinessDate());
    }
}
SED
#!/bin/bash
#Replace tab with pipe
cat $1 | sed 's/\t/|/g' > /tmp/test.out
line_cnt=`wc -l $1 | awk '{print $1-2}'`
if [ `uname -s` = 'SunOS' ]; then
set -A months 0 1 2 3 4 5 6 7 8 9 10 11 12
else
months=(0 1 2 3 4 5 6 7 8 9 10 11 12)
fi
# here it takes the last month's date; similarly you can change it to suit your requirements
YEAR="`date +%Y`"
MONTH="${months[`date +%m-1`]}"
TODAY_STR="`date +%Y`-${months[`date +%m`]}-01"
DAY="`cal $MONTH $YEAR | awk '{ if(NF>1) a=$NF ; if (NF==7) a=$6}END{print a}'`"
LAST_MNTH="`date +%Y`-${months[`date +%m-1`]}-$DAY"
cat /tmp/test.out | sed -e "s/HEADERHYPERION/HEADERHYPERION${LAST_MNTH}${TODAY_STR}/" > /tmp/test_tmp.out
cat /tmp/test_tmp.out | sed -e "s/TRAILER/TRAILER${line_cnt}/" > $2 -
Essbase Data Export not Overwriting existing data file
We have an ODI interface in our environment that exports data from Essbase apps to text files using DataExport calc scripts; we then load those text files into a relational database. Lately we have seen an issue where the DataExport calc script is not overwriting the file but is instead appending the new data to the existing file.
The OverWriteFile option is set to ON.
SET DATAEXPORTOPTIONS {
DataExportLevel "Level0";
DataExportOverWriteFile ON;
DataExportDimHeader ON;
DataExportColHeader "Period";
DataExportDynamicCalc ON;
};
The "Scenario" member is set through a substitution variable at runtime. We are trying to extract "Budget", but the calc script is not clearing the previously extracted "Actual" scenario from the text file: after the calc script executes, the file contains both "Actual" and "Budget" data. We have not been able to find the root cause of why the DataExportOverwriteFile option is being ignored by the data export calc script.
We have also deleted the text data file to make sure there are no temporary files on the server or anything. But when we ran the data export directly from Essbase again, the file once again contained both "Actual" and "Budget" data, which is really strange. We have never encountered an issue like this before.
Any suggestions regarding this issue?
Did some more testing and pretty much zeroed in on the issue. Our scenarios are actually named like "Q1FCST-Budget", "Q2FCST-Budget", etc.
This is why we need to use a member function: the calc script reads "&ODI_SCENARIO" (which is set to Q2FCST-Budget) as a number and gives an error, so we use the @member function to convert the value to a string. And this seems to be the root cause of the issue. The ODI_SCENARIO variable is set to "Q2FCST-Budget", but when we run the script with @member("&ODI_SCENARIO"), the data file brings back values for "Q1FCST-Budget" out of nowhere, in addition to the "Q2FCST-Budget" data we are trying to extract.
Successful Test Case 1:
1) Put Scenario "Q2FCST-Budget" in hard coded letters in Script and ran the script
e.g "Q2FCST-Phased"
2) Ran the Script
3) Result OK: the script overwrote the file with Q2FCST-Budget data
Successful Case 2:
1) Put scenario in @member function
e.g. @member("Q2FCST-Budget")
2) Results again ok
Failed Case:
1) Deleted the file
2) Put the scenario in a substitution variable, used the member function @member("&ODI_SCENARIO"), and ran the script. *ODI_SCENARIO is set to Q2FCST-Budget in Essbase variables.
e.g. @member("&ODI_SCENARIO")
3) Result : Script contained both "Q1FCST-Budget" as well as "Q2FCST-Budget" data values in the text file.
We are still not close to the root cause of why this issue is happening. Putting the sub var inside the member function changes the complete picture and gives us inaccurate results.
Any clues anyone? -
Data Export to Oracle table from essbase issue
Hello,
I am using a data export calc script to load data from Essbase into an Oracle table. We have Essbase 11.1.2.1 on a Windows 2008 R2 64-bit server. I have an ODBC system DSN created for this job.
However, when I launch this process I get a message in the log: "Cannot read SQL driver name for [Backup Exec Catalogs] from [ODBC.INI]"
I have checked the ODBC.ini file in the C:\Windows directory, and that file contains the connection entries...
Any thoughts as to why I am getting this error on one server, whereas the same process on the SIT server works fine?
Thanks...
Please restart the application and try again. Check the application log for any details.
-
Essbase & Planning Data Exports
Hi,
Wanted to know the different ways of exporting data out of Essbase & Planning Apps to load into other relational database systems.
For pure Essbase apps, report scripts is one option. Please add any other methods you have used.
By Planning app, I mean I need to export both Essbase numeric data as well as planning related data such as Supporting detail, Cell Texts, Textual information.
Are there any advanced tools?
Appreciate your thoughts.
Thanks,
- Ethan.
Well, LCM comes to mind, as it does all of the above.
Star Analytics Server may do much of what you want (it certainly does Essbase data exports, I don't know about the rest).
You could write SQL extracts for much of the dimensionality -- I have blogged almost all of these. The OutlineLoad.cmd tool in 11.1.2.2.300 form will export just about everything (not so sure about cell text, SD, etc.), and now it will do that to SQL. John Goodwin wrote about this a few months ago on his blog.
If you are looking for a fast export of targeted BSO data, take a look at MDX. I recently wrote about an undocumented (for now) MDX keyword, NONEMPTYBLOCKS, that makes level-zero #Missing-suppressed data extracts really, really, really fast:
http://camerons-blog-for-essbase-hackers.blogspot.com/2013/01/the-fastest-way-to-export-targeted-data.html
Regards,
Cameron Lackpour
P.S. I should note that MDX only writes to text files and it's kind of ugly when it does it. The speed of that command may override any sense of offended aesthetics.
Edited by: CL on Jan 22, 2013 10:54 AM -
The export file from a calc script - naming and date/time stamp
Here is a simple calc script to export data.
2 questions:
1. Is there an easy way to add a date/time stamp on the name of the data file so that it does not overwrite the older ones every time?
2. I tried to specify the path so that it writes to another server, e.g. "\\mfldappp011\E:\Exp_AW.txt", but it's not working. How should I specify the path?
Fix (@Relative("Yeartotal",0),"Actual","Working",&ActualYear);
Dataexport "file" "," "C:\Exp_AW.txt" "#MI";
EndFix;
Edited by: user9959627 on Sep 7, 2012 11:25 AM
Probably easiest to call the MaxL script from a command-line script, then rename the exported file to include the time stamp and copy/move it to a location.
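That rename-and-move step can be sketched in shell along these lines; the paths and the essmsh call are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: after the export produces its fixed-name file, stamp the name
# with date/time so successive runs never overwrite each other.
stamp_and_move() {
    src=$1; dest_dir=$2
    ts=$(date +%Y%m%d_%H%M%S)
    base=${src##*/}                    # e.g. Exp_AW.txt
    name=${base%.*}; ext=${base##*.}   # split into name and extension
    mv "$src" "$dest_dir/${name}_${ts}.${ext}"
}
# essmsh run_export.msh               # assumed to leave Exp_AW.txt behind
# stamp_and_move /c/Exp_AW.txt //mfldappp011/share
```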
Cheers
John
http://john-goodwin.blogspot.com/ -
Append date to exporting file name in SQL reporting service
When exporting a report to another format, say excel,PDF; the file name is always set to the report name. Our client has a requirement where whenever a user exports a report to pdf, the timestamp of when the data of the report was made should be appended in the filename. Is there a way to do this in Reporting Services as well as report builder?
example : Report name : Testreport
Exported file should be: Testreport-November-22-2007.pdf
please help me in this
Thanks
suku
Hi,
I know it's been while since this question was posted. I am replying so that it'll be useful for other people when they come across this same situation.
If you have access to SQL Server Agent to create jobs then this idea will be helpful -
1. Connect to your Report Server from SSMS.
2. Note the ItemID for your report by executing this query
select *
from ReportServer..Catalog
where Path like '%NameofyourReport%'
3. Create a job in here with the name say "AppendDatetoxyzreport" and add a step in it with the code
UPDATE [Catalog]
SET [Path] = '/PathofyourrerportfromReportServer/ReportName_' + CONVERT(VARCHAR(8), GETDATE(), 112),
[Name] = '/ReportName_' + CONVERT(VARCHAR(8), GETDATE(), 112)
WHERE ItemID ='<ItemID>'
4. Create another job with a name like "RemoveDatexyzreport" and add a step in it with the code
UPDATE [Catalog]
SET [Path] = 'PathofyoureportinReportserver/ReportName',
[Name] = 'ReportName'
WHERE ItemID = '<ItemID>'
5. Now set your AppendDatetoxyzreport job schedule to execute first, then the Report's subscription and then the RemoveDatexyzreport.
By doing this the first job changes the name to include present date, then you get you report and the next job sets it back to its previous name. -
What data is contained in the export file for a transportable tablespace?
"This operation will generate an export file, jan_sales.dmp. The export file will be small, because it contains only metadata. In this case, the export file will contain information describing the table temp_jan_sales, such as the column names, column datatype, and all other information that the target Oracle database will need in order to access the objects in ts_temp_sales."
What is the other information?
I don't have the sales dump; I was only quoting from the documentation. What I have is an export of 18 tablespaces: 9 partitioned data tablespaces and 9 partitioned index tablespaces.
What I was trying to determine was whether I could still import the tablespaces if the tablespace files were from a time one week after the original export of the metadata, or whether some bit of information held in the metadata would prevent this.
Create directory with system date for export files
Hello,
I have a requirement to create a system-date folder automatically for the export dumps: instead of going into d:\oracle\exp\....., they should go into d:\oracle\exp\03182009, and for each day a new date folder should be created to hold that day's export files.
Here I am sending two of my scripts.
The first one is:
set ORACLE_SID=%1
set PATH=D:\ORACLE\ora102\bin;C:\WINNT\system32;C:\WINNT
set LOCAL=2:%1
net use h: /delete
net use login
sqlplus test/test@%1 @d:\oracle\dba\apps\expfull\buildfile < d:\oracle\dba\apps\expfull\enter.txt
exp parfile=h:\admin\%1\exp\%1_FULL.par
call h:\admin\%1\exp\%1_FULL.cmd
this is the buildfile script
set term on
set head off
set feed off
set ver off
set lines 159
set pages 0
set wrap on
set concat off
clear columns
clear computes
clear breaks
clear buffer
ttitle off
btitle off
col dbnm new_value sid_dir noprint
col parnm new_value parfil noprint
col dmpnm new_value dmpfil noprint
col lognm new_value logfil noprint
col lstnm new_value lstfil noprint
col zipnm new_value zipfil noprint
col cmdnm new_value cmdfil noprint
col ownr new_value ownername noprint
col tabnm new_value tabname noprint
col partnm new_value partname noprint
col datest new_value datestmp noprint
-- Load the sid_dir, ownername, tabname, partname, and datestmp substitution variables.
select
name dbnm
, to_char(sysdate,'YYYYMMDD')||'_'||'N' datest
, upper('&1') ownr
, upper('&2') tabnm
, upper('&3') partnm
from v$database;
-- Load the filename substitution variables.
select
'&sid_dir'||'_'||'&ownername'||'_'||'&partname'||'.par' parnm
, '&sid_dir'||'_'||'&ownername'||'_'||'&partname'||'_'||'&datestmp'||'.dmp' dmpnm
, '&sid_dir'||'_'||'&ownername'||'_'||'&partname'||'_'||'&datestmp'||'.log' lognm
, '&sid_dir'||'_'||'&ownername'||'_'||'&partname'||'_'||'&datestmp'||'.zip' zipnm
, '&sid_dir'||'_'||'&ownername'||'_'||'&partname'||'.lst' lstnm
, '&sid_dir'||'_'||'&ownername'||'_'||'&partname'||'.cmd' cmdnm
from dual;
-- Build the export parameter file.
spool h:\admin\&sid_dir\exp\&parfil
prompt userid=test/test@&sid_dir
prompt buffer=4194304
prompt direct=Y
prompt recordlength=65535
prompt consistent=Y
prompt file=h:\admin\&sid_dir\exp\&dmpfil
prompt log=h:\admin\&sid_dir\exp\&logfil
prompt tables=(&ownername.&tabname:&partname)
spool off
Please help out...
> I have a requirement to add the system date folder automatically, as the export file
> dumps get into d:\oracle\exp\...; this should be d:\oracle\exp\03182009, and for each
> day it should create these date folders to dump the export files.
OK - well, you will (AFAICR) have to create the directory, then cd into it,
then run your script - I don't see what the problem is?
You could* do something like -
at 00:01 run a cron job that simply does something like this
(I'm just giving outline - not a particular implementation)
runs date
gets the day, month and year from that -
export MyDay=day
export MyMonth=month
export MyYear=year
mkdir /path/to/dir/oracle/exp/$MyDay$MyMonth$MyYear
At the beginning of your script, before going into SQLPlus
----Your usual stuff------------
set ORACLE_SID=%1
set PATH=D:\ORACLE\ora102\bin;C:\WINNT\system32;C:\WINNT
set LOCAL=2:%1
net use h: /delete
net use login
++++plus
cd /path/to/dir/oracle/exp/$MyDay$MyMonth$MyYear
Maybe this will be a help to get you started - it could probably be much more elegant in
Python or whatever, but if shell-scripting is the way you want to go...
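The outline above can be condensed into a small shell sketch; the MMDDYYYY layout mirrors the poster's d:\oracle\exp\03182009 example, and the base path is an assumption:

```shell
#!/bin/sh
# Sketch: derive the date folder name and create it before the export
# runs, so each day's dump lands in its own directory.
make_exp_dir() {
    base=$1
    stamp=$(date +%m%d%Y)    # e.g. 03182009
    mkdir -p "$base/$stamp"
    echo "$base/$stamp"
}
# dir=$(make_exp_dir /d/oracle/exp)
# exp parfile=... file="$dir/full.dmp" log="$dir/full.log"
```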
I'd be interested to see if this can provide the basis for a solution.
HTH and TIA.
Paul... -
Using a DATE function in the EXPORT file name
Product: ORACLE SERVER
Date written: 2002-04-08
PURPOSE
The following explains how to use a date function as the file name when running export.
Explanation
1) To run exp from a shell script with the date embedded in the file name:
CDATE=d`date '+%m%d%H%M%S' `;export CDATE
exp scott/tiger file=$CDATE.dmp full=y
2) At the UNIX prompt:
$ setenv CDATE d`date '+%m%d%H%M%S' `
exp scott/tiger file=$CDATE.dmp full=y
Reference Documents
--------------------
Hi All,
I got the solution.
The code which i had written in the FM is wrong.
OUTPUT = SY-DATUM - 1. // Wrong.
The correct one is below :
DATA: Temp TYPE SY-DATUM.
Temp = SY-DATUM - 1.
OUTPUT = Temp.
And after that, in FM's Export tab, it should be Export parameter as OUTPUT and checked the "Pass value" check box.
For the Physical File : <F=TEST>.txt.
Here the FM which i created was FILENAME_EXIT_TEST.
When I am calling the corresponding Logical file in the APD, It is working fine.
Thanks for your inputs. -
Date concatination to export file name on windows / Novell
Hi everybody,
I need a utility that can append the date to the end of an export file name. How is this possible in DOS/Windows/NT/Novell?
e.g. exprod.dmp --> exprod07122002.dmp
Thanks
If you're invoking export with a batch script you could do something like:
exp file=exp%date%.dmp <other options>
If you want to format the date you need to do something ugly like:
setlocal
for /F "tokens=1-4 delims=/ " %%i in ('date /t') do (
set DayOfWeek=%%i
set Month=%%j
set Day=%%k
set Year=%%l
)
exp file=exp%Day%%Month%%Year%.dmp <other options>
endlocal
-Antti -
Query in File to File Data Export
Hi,
I have done a project for file to file data export using this link below:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/odi/odi_11g/odi_project_ff-to-ff/odi_project_flatfile-to-flatfile.htm
My project works fine. Here the source file is on the local machine (C:\Oracle\Middleware\Oracle_ODI1\oracledi\demo\file).
1) What should I do if the source file is on a remote machine (say its IP is 172.22.18.90)?
2) Will it cause any problem if the remote machine's OS is UNIX?
Thanks
Papai
NB: My machine's OS is Windows 7.
If you can access the other machine like a shared folder, then provide the same path in the physical schema, like \\my_other_pc_on_shared\new_folder.
If you cannot access it like that, then create an agent on that machine which can access the path, and execute your project using that agent.
Thanks
Bhabani
http://bhabaniranjan.com/ -
Exporting files, keep date
I have shot a lot of raw files and imported them into LR 5.5.
When I have edited and selected the pictures, I export them as JPG files for use on the internet.
I included all metadata info, and "remove location info" is selected.
After the export, the files have a creation date of today and not the original date from the RAW file.
Is there a way to set the original date directly from the Lightroom export?
I don't believe so, as the exported files are new iterations of the original raw data. They were created today when exported, hence that date. That doesn't mean you couldn't figure out a way to get the raw capture date into some other metadata field, but in terms of what the Finder reports, it has to report when that file was created.
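That said, if "Include All Metadata" was on, the capture time does survive inside the exported JPG's EXIF, so the filesystem date can be rewritten after export. A workaround outside Lightroom, assuming the third-party exiftool utility is installed and the path is adjusted to your export folder:

```shell
# Rewrite each exported file's modification date from its EXIF capture
# time (path is illustrative; exiftool must be installed separately)
exiftool "-FileModifyDate<DateTimeOriginal" /path/to/exports/*.jpg
```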