Running a PERFORM as parallel jobs

Hi Experts,
I have a dynamic internal table that is filled with one lakh (100,000) records at runtime. I run some logic over it to format data based on the records that were filled. When I execute this program it takes a very long time, because the internal table is dynamic and the data volume is huge.
So I need to move this logic into a PERFORM and run it in the background.
Can you help me with how to split the data and process it in background jobs?
Thanks in Advance
Gow

Hi,
You can do this if you want to execute it in background mode. Follow the steps below.
1) Create a second program. This program will be executed in the background when called from the first program via the SUBMIT statement. The data is brought into this second program with the IMPORT statement into an internal table; write your required logic against those internal table entries.
2) In your first program, since you want to split the records and process them in the background, split the data into a separate internal table per packet and then call the second program with the SUBMIT statement. Before each call, export that internal table with the EXPORT statement.
If the data is processed successfully in the second program, a spool is generated for every call. You can check the background job status in transaction SM37.
thanks,
sksingh
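
A rough sketch of those two steps follows. Every name in it (ZSECOND_PROG, the INDX area ZZ, the parameter P_ID, the packet size) is a placeholder, and the post's dynamic table is shown as a static one (lt_data) for readability. Note that ABAP memory is not shared with a batch job, so the packet is passed through an INDX-type database cluster rather than EXPORT TO MEMORY.

* First program: split the big table lt_data into packets and submit
* one background job per packet (sketch only).
CONSTANTS c_pack TYPE i VALUE 10000.

DATA: lt_packet   LIKE lt_data,          " same type as the big table
      ls_indx     TYPE indx,
      lv_id       TYPE indx-srtfd,
      lv_jobname  TYPE tbtcjob-jobname,
      lv_jobcount TYPE tbtcjob-jobcount,
      lv_packets  TYPE i.

lv_packets = ( lines( lt_data ) - 1 ) DIV c_pack + 1.

DO lv_packets TIMES.
  CLEAR lt_packet.
  APPEND LINES OF lt_data
         FROM ( sy-index - 1 ) * c_pack + 1
         TO   sy-index * c_pack
         TO lt_packet.

* pass the packet via an INDX cluster, keyed per job
  lv_id = |PACKET{ sy-index }|.
  EXPORT packet = lt_packet
         TO DATABASE indx(zz) FROM ls_indx ID lv_id.

* open, submit and release one background job for this packet
  lv_jobname = |ZSPLIT{ sy-index }|.
  CALL FUNCTION 'JOB_OPEN'
    EXPORTING jobname  = lv_jobname
    IMPORTING jobcount = lv_jobcount.
  SUBMIT zsecond_prog WITH p_id = lv_id
         VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING jobname   = lv_jobname
              jobcount  = lv_jobcount
              strtimmed = 'X'.
ENDDO.

* Second program ZSECOND_PROG (PARAMETERS p_id TYPE indx-srtfd):
*   IMPORT packet = lt_packet FROM DATABASE indx(zz) ID p_id.
*   ... process lt_packet ...
*   DELETE FROM DATABASE indx(zz) ID p_id.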

Similar Messages

  • Query to Report on Parallel Jobs Running

    Morning!
    I would like to get a query that reports on my parallel jobs.
    For each minute that a procedure is running I would like to know what stages are running.
    I log the whole procedure in a table called run_details and the start and end of each stage in a table called incident.
    I'm running Oracle 9i
    Here is some sample data based on 2 threads; the expected output is at the bottom.
    SQL>CREATE TABLE run_details
      2  (run_details_key  NUMBER(10)
      3  ,start_time       DATE
      4  ,end_time         DATE
      5  ,description      VARCHAR2(50)
      6  );
    SQL>CREATE TABLE incident
      2  (run_details_key NUMBER(10)
      3  ,stage           VARCHAR2(20)
      4  ,severity        VARCHAR2(20)
      5  ,time_stamp      DATE
      6  );
    SQL>INSERT INTO run_details
      2  VALUES (1
      3         ,TO_DATE('08/10/2007 08:00','DD/MM/YYYY HH24:MI')
      4         ,TO_DATE('08/10/2007 08:10','DD/MM/YYYY HH24:MI')
      5         ,'Test'
      6         );
    SQL>INSERT INTO incident
      2  VALUES (1
      3         ,'Stage1'
      4         ,'START'
      5         ,TO_DATE('08/10/2007 08:00','DD/MM/YYYY HH24:MI')
      6         );
    SQL>INSERT INTO incident
      2  VALUES (1
      3         ,'Stage1'
      4         ,'END'
      5         ,TO_DATE('08/10/2007 08:08:53','DD/MM/YYYY HH24:MI:SS')
      6         );
    SQL>INSERT INTO incident
      2  VALUES (1
      3         ,'Stage2'
      4         ,'START'
      5         ,TO_DATE('08/10/2007 08:00','DD/MM/YYYY HH24:MI')
      6         );
    SQL>INSERT INTO incident
      2  VALUES (1
      3         ,'Stage2'
      4         ,'END'
      5         ,TO_DATE('08/10/2007 08:04:23','DD/MM/YYYY HH24:MI:SS')
      6         );
    SQL>INSERT INTO incident
      2  VALUES (1
      3         ,'Stage3'
      4         ,'START'
      5         ,TO_DATE('08/10/2007 08:04:24','DD/MM/YYYY HH24:MI:SS')
      6         );
    SQL>INSERT INTO incident
      2  VALUES (1
      3         ,'Stage3'
      4         ,'END'
      5         ,TO_DATE('08/10/2007 08:10','DD/MM/YYYY HH24:MI')
      6         );
    SQL>select * from incident;
    RUN_DETAILS_KEY STAGE      SEVERITY   TIME_STAMP
                  1 Stage1     START      08/10/2007 08:00:00
                  1 Stage1     END        08/10/2007 08:08:53
                  1 Stage2     START      08/10/2007 08:00:00
                  1 Stage2     END        08/10/2007 08:04:23
                  1 Stage3     START      08/10/2007 08:04:24
                  1 Stage3     END        08/10/2007 08:10:00
    So stages 1 and 2 run in parallel from 08:00; then at 08:04:23 stage 2 stops and a second later stage 3 starts.
    Set some variables:
    SQL>define start_time = null
    SQL>col start_time new_value start_time
    SQL>define end_time = null
    SQL>col end_time new_value end_time
    SQL>
    SQL>SELECT start_time-(1/(24*60)) start_time
      2        ,end_time
      3  FROM   run_details
      4  WHERE  run_details_key =  1;
    START_TIME          END_TIME
    08/10/2007 07:59:00 08/10/2007 08:10:00
    Get every minute that the process is running for:
    SQL>WITH t AS (SELECT TRUNC(TO_DATE('&start_time','dd/mm/yyyy hh24:mi:ss'),'MI') + rownum/24/60 tm
      2             FROM   dual
      3             CONNECT BY ROWNUM <= (TO_DATE('&end_time','dd/mm/yyyy hh24:mi:ss')
      4                                   -TO_DATE('&start_time','dd/mm/yyyy hh24:mi:ss')
      5                                  )*24*60
      6            )
      7  SELECT tm
      8  FROM t;
    old   1: WITH t AS (SELECT TRUNC(TO_DATE('&start_time','dd/mm/yyyy hh24:mi:ss'),'MI') + rownum/24/60 tm
    new   1: WITH t AS (SELECT TRUNC(TO_DATE('08/10/2007 07:59:00','dd/mm/yyyy hh24:mi:ss'),'MI') + rownum/24/60 tm
    old   3:            CONNECT BY ROWNUM <= (TO_DATE('&end_time','dd/mm/yyyy hh24:mi:ss')
    new   3:            CONNECT BY ROWNUM <= (TO_DATE('08/10/2007 08:10:00','dd/mm/yyyy hh24:mi:ss')
    old   4:                                -TO_DATE('&start_time','dd/mm/yyyy hh24:mi:ss')
    new   4:                                -TO_DATE('08/10/2007 07:59:00','dd/mm/yyyy hh24:mi:ss')
    TM
    08/10/2007 08:00:00
    08/10/2007 08:01:00
    08/10/2007 08:02:00
    08/10/2007 08:03:00
    08/10/2007 08:04:00
    08/10/2007 08:05:00
    08/10/2007 08:06:00
    08/10/2007 08:07:00
    08/10/2007 08:08:00
    08/10/2007 08:09:00
    08/10/2007 08:10:00
    11 rows selected.
    Get stage, start and end times, and duration:
    SQL>SELECT ai1.stage
      2        ,ai1.time_stamp start_time
      3        ,ai2.time_stamp end_time
      4        ,SUBSTR(numtodsinterval(ai2.time_stamp-ai1.time_stamp, 'DAY'), 12, 8) duration
      5  FROM   dw2.incident ai1
      6  JOIN   dw2.incident ai2
      7         ON ai1.run_details_key = ai2.run_details_key
      8         AND ai1.stage = ai2.stage
      9  WHERE ai1.severity = 'START'
    10  AND ai2.severity = 'END'
    11  AND ai1.run_details_key  = 1
    12  ORDER BY ai1.time_stamp
    13  /
    STAGE      START_TIME          END_TIME            DURATION
    Stage1     08/10/2007 08:00:00 08/10/2007 08:08:53 00:08:52
    Stage2     08/10/2007 08:00:00 08/10/2007 08:04:23 00:04:22
    Stage3     08/10/2007 08:04:24 08/10/2007 08:10:00 00:05:36
    Then combine both (or do something else) to get this:
    TM                  THREAD_1 THREAD_2
    08/10/2007 08:00:00 Stage1   Stage2
    08/10/2007 08:01:00 Stage1   Stage2
    08/10/2007 08:02:00 Stage1   Stage2
    08/10/2007 08:03:00 Stage1   Stage2
    08/10/2007 08:04:00 Stage1   Stage2
    08/10/2007 08:05:00 Stage1   Stage3
    08/10/2007 08:06:00 Stage1   Stage3
    08/10/2007 08:07:00 Stage1   Stage3
    08/10/2007 08:08:00 Stage1   Stage3
    08/10/2007 08:09:00          Stage3
    08/10/2007 08:10:00          Stage3
    Ideally I'd like this to work for n threads, as I want this to run on different environments that have different numbers of CPUs.
    Thank you for your time.
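
    One way to combine the minute generator with the stage intervals into that pivot is sketched below, using only the tables defined above. It ranks the stages active in each minute by start time, so a stage can shift columns once an earlier stage ends (at 08:09 Stage3 lands in THREAD_1 rather than THREAD_2); pinning a stage to one column, and supporting n threads generically, needs extra logic or dynamic SQL.
    WITH bounds AS (
      SELECT TRUNC(start_time,'MI') AS t0, end_time AS t1
      FROM   run_details
      WHERE  run_details_key = 1
    ), minutes AS (
      -- one row per minute of the whole run
      SELECT b.t0 + (ROWNUM-1)/24/60 AS tm
      FROM   bounds b
      CONNECT BY ROWNUM <= (b.t1 - b.t0)*24*60 + 1
    ), stages AS (
      -- pair each START incident with its END incident
      SELECT s.stage, s.time_stamp AS start_time, e.time_stamp AS end_time
      FROM   incident s
      JOIN   incident e ON  e.run_details_key = s.run_details_key
                        AND e.stage = s.stage
      WHERE  s.severity = 'START'
      AND    e.severity = 'END'
      AND    s.run_details_key = 1
    ), active AS (
      -- rank the stages running in each minute
      SELECT m.tm, s.stage,
             ROW_NUMBER() OVER (PARTITION BY m.tm
                                ORDER BY s.start_time, s.stage) AS thread
      FROM   minutes m
      JOIN   stages s ON m.tm BETWEEN s.start_time AND s.end_time
    )
    SELECT tm,
           MAX(CASE WHEN thread = 1 THEN stage END) AS thread_1,
           MAX(CASE WHEN thread = 2 THEN stage END) AS thread_2
    FROM   active
    GROUP BY tm
    ORDER BY tm;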

    > Ideally I'd like this to work for n-threads, as I want this to run on different environments that have different numbers of CPUs.
    The number of CPUs is not always a good indication of the processing load that a platform can take, especially when the processing load involves a lot of I/O.
    You can have 99% CPU idle time with 1,000 active processes, as that idle time is in fact CPU time spent waiting on I/O completion, courtesy of a severely strained I/O channel that is the bottleneck.
    Another factor is memory (resources). You may, for example, have 4 CPUs with 8 GB of physical memory, where a single process (typically a Java VM for a complex process) grabs a huge amount of memory. Assuming 4 threads/CPU, or even 1 thread/CPU, can then be a severe overestimate given the amount of memory needed. Getting this wrong leads in turn to excessive virtual memory paging and reduces the platform's performance drastically.
    CPU count alone is a very poor basis for deciding on a platform's capacity to run parallel processes.

  • Running 3 batches in parallel (a.ksh, b.ksh, c.ksh); d.ksh starts when a.ksh terminates successfully; how do we handle errors for all the jobs?

    We are running 3 batches in parallel (a.ksh, b.ksh, c.ksh). When a.ksh terminates successfully, d.ksh should start. We also have to handle errors for all the jobs (in case some job gets aborted during runtime). How can this be done?

    Moderator Action:
    You already asked this question two days earlier:
    https://forums.oracle.com/thread/2585158
    Stay with your original post. Deliberate multiple posting is the same as spamming the forums.
    This new post is locked.

  • Using DAC for running non-BI Apps Informatica jobs and running 2 EPs in parallel

    Hi,
    We have already set up a BI Apps production environment using DAC, Informatica and OBIEE 11g for one of our customers.
    Now we want to check the possibility of using DAC for running non-BI Apps related Informatica jobs.
    (As we have only a weekly run of the DAC execution plan on weekends, Informatica and DAC are idle most of the time during weekdays.)
    The customer wants a separate new small datamart to be set up, which will cater to the reporting requirements of a different department and has no relation or link to the existing BI Apps datawarehouse.
    I just wanted to check whether it would violate the licensing terms (if we use DAC for non-BI Apps workflows and run another EP)?
    Also, is DAC Build 10.1.3.4.1 capable of running two execution plans in parallel?
    We heard a while back that a feature for running two EPs in parallel would be launched in the DAC 11g version. Any pointers or news in this space?
    Thanks in Advance,

    From what I recall, you CANNOT load a "separate" DB instance that is NOT OBIA. If you create a small custom datamart INSIDE the existing OBIA schema, then it is acceptable. However, if you are using DAC (regardless of whether it is one plan or two) to load a non-OBIA target, this may violate the licensing agreement. You may need a separate standalone license for Informatica and use Informatica's scheduler tool. If you want to use DAC, make sure your target is inside the OBIA DW.
    Please mark correct...

  • Problem in table locking while running the parallel jobs (deadlock?)

    Hi,
    I am trying to delete the entries from a custom table. When I schedule parallel jobs, I get the following dump while deleting the entries from the table. This is happening for a custom table (in a CRM system).
    Exception: DBIF_RSQL_SQL_ERROR
    It complains that a deadlock occurred. I am not sure what that means.
    Could you please help me out with how we can solve this issue?
    Thanks,
    Sandeep

    Hello Sandeep,
    I would suggest you use record-based lock objects.
    Take a look at the link below to get a general idea:
    <link removed>
    Hope it helps,
    Thanks and Regards,
    Edited by: Suhas Saha on Aug 19, 2011 11:27 AM

  • Service template problem - Unable to perform the job because one or more of the selected objects are locked by another job - ID 2606

    Hello,
    I’ve finally managed to deploy my first guest cluster with a shared VHDX using a service template. 
    So, I now want to try and update my service template.  However, whenever I try to do anything with it, in the services section, I receive the error:
    Unable to perform the job because one or more of the selected objects are locked by another job.  To find out which job is locking the object, in the jobs view, group by status, and find the running or cancelling job for the object.  ID 2606
    Well I tried that and there doesn’t seem to be a job locking the object.  Both the cluster nodes appear to be up and running, and I can’t see a problem with it at all.  I tried running the following query in SQL:
    SELECT * FROM [VirtualManagerDB].[dbo].[tbl_VMM_Lock] where TaskID='Task_GUID'
    but all this gives me is an error that says: Msg 8169, Level 16, State 2, Line 1 - Conversion failed when converting from a character string to uniqueidentifier.
    I'm no SQL expert as you can probably tell, but I'd prefer not to deploy another service template in case this issue occurs again.
    Can anyone help?

    No one else had this?
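
    For what it's worth, Msg 8169 here simply means the literal placeholder 'Task_GUID' was left in the query: TaskID is a uniqueidentifier column, so it needs an actual GUID, or no filter at all. A sketch (the GUID below is a made-up placeholder):
    -- List every current lock row, then re-filter using a real TaskID
    -- value copied from that output.
    SELECT * FROM [VirtualManagerDB].[dbo].[tbl_VMM_Lock];
    SELECT * FROM [VirtualManagerDB].[dbo].[tbl_VMM_Lock]
    WHERE TaskID = '00000000-0000-0000-0000-000000000000';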

  • Unable to remove a host from VMM - Error (2606) Unable to perform the job because one or more of the selected objects are locked by another job.

    I am unable to remove a host from my Virtual Machine Manager 2012 R2. I receive the following error:
    Error (2606)
    Unable to perform the job because one or more of the selected objects are locked by another job.
    Recommended Action
    To find out which job is locking the object, in the Jobs view, group by Status, and find the running or canceling job for the object. When the job is complete, try again.
    I have already tried running the following command in SQL Server Management Studio
    SELECT * FROM [VirtualManagerDB].[dbo].[tbl_VMM_Lock] where TaskID='Task_GUID'
    I received this error back:
    Msg 8169, Level 16, State 2, Line 1
    Conversion failed when converting from a character string to uniqueidentifier.
    I have also tried rebooting both the host and the Virtual Machine Manager Server.  After rebooting them both, I still receive the same error when trying to remove the host.
    Here are my server details
    VMM Server OS = Windows 2012 Standard
    VMM Version = 2012 R2 3.2.7510.0
    Host OS = Windows 2012 R2 Datacenter
    Host Agent Version = 3.2.75.10.0
    SQL Server OS = Windows 2012 Datacenter
    SQL Version = 2012 SP 1 (11.0.3000.0)

    Hi there,
    How many hosts are you managing with your VMM server?
    The locking job might be the background host refresher job. Did you see any jobs in the jobs view, when the host removal job failed?
    If there are no active jobs in the Jobs view when this host removal job fails, can you please turn on VMM tracing, retry the host removal, and paste back the traces for the failed job (search for the exception and paste the whole stack)?
    Thanks!
    Cheng

  • Load Distribution Versus Parallel Jobs processing

    Hello,
    I would request feedback.
    In Industry Solutions - PS, for the mass activity Automatic Clearing run, there is a Technical Settings tab.
    This tab has load distribution settings and a value that can be defined for Number of Jobs.
    If I enter a value greater than 1, that many jobs get executed in the background.
    I would request clarification on the following:
    1. What is the use/benefit of defining automatic load distribution?
    2. Does this setting equal the configuration settings under IMG - Financial Accounting - Contract AR - Technical Settings - Prepare Mass?
    There, we can set the parallel jobs that can be initiated by providing a maximum number of jobs under the job control panel.
    Does this setting override the load distribution settings or vice versa, OR is this a different setting?
    3. What is the difference between load distribution and parallel jobs?
    Any feedback is most welcome and appreciated.
    I am a functional consultant and hence unable to distinguish between the two.
    I am not sure if this query needs to be posted here, but I have also posted the same in the IS forum.
    Regards
    Bala
    email : [email protected]

    Hi Bala,
    In any mass activity you will find the Technical Settings tab, with a Parallel Processing Object and Load Distribution. For the parallel processing object you select the object according to whose input the job has to be divided. For example, GPART is used when the job has to be divided by business partner. In Maintain Variants you have to create variants: you give a variant name and a value in either Interval Length or Number of Intervals. The interval length decides the maximum number of objects processed in a single part; the number of intervals tells into how many parallel processes the job will be divided.
    For example, say you are running a parallel process for 1,000 business partners and you choose GPART as the object. If you put 200 in interval length, it will divide the job into 5 parallel processes. If you put 10 in number of intervals, it will divide the job into 10 parallel processes of 100 business partners each.
    After creating the variant, you use it in the technical settings.
    Say you have already defined a variant which divides the job into 30 parts, but you want at most 6 parallel processes to execute at a time: then put 6 in the Number of Jobs field under automatic load distribution.
    I hope this clears up your questions. If you still have any doubt, feel free to ask me.
    Thanks and regards,
    Jyotishankar Dutta

  • Billing setup - parallel jobs and size of the setup table

    Hello All,
    I want to do a billing (13) setup using company code, sales org and document number range as selection criteria.
    Would it be okay to run the jobs in parallel? I will be setting up jobs for different variants containing different document ranges. Would there be a conflict in running parallel jobs if they belong to the same company codes/sales orgs?
    Q2: Is there a limit to how far we can fill up the setup table, and will we need to empty it after transferring the data into BI?
    Thanks!
    Edited by: BI Quest on Oct 7, 2009 6:22 PM

    You can run multiple concurrent setup jobs without an issue. I'd recommend that you only execute 4 concurrently, however, unless you have a huge amount of memory on your source R3/ECC server(s). If you're using the billing document number as part of the selection criteria, there shouldn't be a conflict. In fact, if you're using the billing document as the selection criterion for your multiple setups, the company code and sales org designations really aren't necessary, unless the billing document numbering in your source R3/ECC environment has been configured per company code and/or sales org. In that case you could have duplicate billing document numbers, and the way to distinguish between them is to further qualify by company code and/or sales org.

  • How to run DBMS_JOB jobs in parallel and serially

    I have a total of 8 procedures to run in parallel, and after that my 9th procedure should run.
    Below is my job submission procedure:
    create or replace procedure DURATION_ALARM_WEEKLY as
      l_job number;
    begin
      -- the eight independent weekly jobs
      dbms_job.submit(l_job, 'begin ALARMS_WEEKLY_CALL_OUT; end;');
      dbms_job.submit(l_job, 'begin ALARMS_WEEKLY_CALL_IN; end;');
      dbms_job.submit(l_job, 'begin ALARMS_WEEKLY_DURATIN_OUT; end;');
      dbms_job.submit(l_job, 'begin ALARMS_WEEKLY_DURATIN_IN; end;');
      dbms_job.submit(l_job, 'begin ALARMS_WEEKLY_SMS_OUT; end;');
      dbms_job.submit(l_job, 'begin ALARMS_WEEKLY_SMS_IN; end;');
      dbms_job.submit(l_job, 'begin ALARMS_WEEKLY_SHORT_CALL_OUT; end;');
      dbms_job.submit(l_job, 'begin ALARMS_WEEKLY_IMEI_CHANGE; end;');
      -- FINALE is submitted here as well, so it currently runs in
      -- parallel with the other eight rather than after them
      dbms_job.submit(l_job, 'begin FINALE; end;');
      commit;
    end;
    What syntax do I have to use in my FINALE procedure, with DBMS_ALERT.REGISTER, DBMS_ALERT.WAITANY, ...?
    I have read many articles, but I did not understand where, and in which order, I have to write the calls.
    Edited by: OraFighter on Jul 26, 2012 11:31 AM

    All processes on Oracle run serially. There is no threading inside a process. So a DBMS_JOB process is a serial process: it is started by the system, it executes, it terminates.
    Parallel processing means running a number of these serial processes in parallel. Typically you will break the work up into smaller units, and then run a serialised process per unit of work; by starting a number of these at the same time, you have parallel processing.
    So if you start 9 jobs (processes) at the same time, and there is sufficient job processing capacity, all 9 processes will run at the same time.
    If you want to have the 9th process wait for the others to finish first, you need some kind of logic to either start the 9th process after the other 8 have completed - or you need logic in the 9th process so that it could spin and wait for the 8 others to complete, before it starts its processing.
    The first method is fairly easy. Your code starts each of the 8 job processes. The DBMS_JOB.Submit() call gives you the job number of each process, so you will have 8 job numbers. You can then loop in your code, wait a minute or more, and check the dictionary view USER_JOBS to determine whether these 8 job numbers still exist. If so, the jobs are still scheduled and/or executing.
    If none of the 8 job numbers exist in the view, the 9th job process can be started.
    Other methods are more complex. For example, the 8 job processes can each send a notification to the 9th process when they are completed. This requires additional logic and code in all 9 processes. And you also need to deal with issues like one of the 8 processes completing before the 9th process has even started; how will the 9th then know that one of the 8 has completed? Etc.
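
    A minimal sketch of that first method follows. It assumes the 8 job numbers were collected into a SYS.ODCINUMBERLIST at submit time and that you have EXECUTE on DBMS_LOCK for the sleep call; FINALE is the 9th procedure from the post, everything else is hypothetical. Note that a failed job stays in USER_JOBS (its FAILURES count rises), so production code should also check the BROKEN flag.
    create or replace procedure wait_then_finale(p_jobs in sys.odcinumberlist) as
      l_pending number;
    begin
      loop
        -- count how many of the submitted jobs still exist
        select count(*)
        into   l_pending
        from   user_jobs
        where  job in (select column_value from table(p_jobs));
        exit when l_pending = 0;   -- all jobs finished and were removed
        dbms_lock.sleep(60);       -- wait a minute before checking again
      end loop;
      FINALE;                      -- now run the 9th step
    end;
    /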

  • [solved] make -j (parallel jobs) in PKGBUILD?

    Hi,
    I just rediscovered the -j option in make, which lets make run parallel jobs. On my computer (4 cores, SSD), it speeds things up.
    I was wondering whether it would be clean/permitted/a good idea to use this in PKGBUILDs, by automatically adjusting the -j parameter to the number of cores, or half the number of cores?
    Cheers,
    Charles
    Last edited by cgo (2014-03-12 12:48:45)

    This should not be added to PKGBUILDs.  Makepkg already sets this if the user has opted for it in /etc/makepkg.conf.  If you try to override this you will be using a setting that works best on your machine to override a setting the user has found works best on their own machine.
    For your own use, just set -j4 in makepkg.conf on your system.
    Last edited by Trilby (2014-03-12 12:11:04)
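
    For reference, the setting Trilby mentions is the MAKEFLAGS variable in /etc/makepkg.conf; a typical entry (with the job count adjusted to your own machine) looks like:
    #-- Make Flags: change this for DistCC/SMP systems
    MAKEFLAGS="-j4"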

  • Pin dbms_cube.build parallel jobs to a specific node on RAC

    Is there a way to pin dbms_cube.build parallel jobs to a specific node on RAC? Currently a job with, say, a parallelism of 10 spans all nodes of the RAC.
    Is there a way to control it so that the child jobs run on a subset of nodes? I am unable to see how we can tie job classes and services to dbms_cube.build.
    Any suggestions will be hugely appreciated.

    I used the undocumented JOB_CLASS parameter and it seems to be working fine.
    SQL> desc sys.dbms_cube.build
    Parameter            Type            Mode  Default?
    SCRIPT               VARCHAR2        IN
    METHOD               VARCHAR2        IN    Y
    REFRESH_AFTER_ERRORS BOOLEAN         IN    Y
    PARALLELISM          BINARY_INTEGER  IN    Y
    ATOMIC_REFRESH       BOOLEAN         IN    Y
    AUTOMATIC_ORDER      BOOLEAN         IN    Y
    ADD_DIMENSIONS       BOOLEAN         IN    Y
    SCHEDULER_JOB        VARCHAR2        IN    Y
    MASTER_BUILD_ID      BINARY_INTEGER  IN    Y
    REBUILD_FREEPOOLS    BOOLEAN         IN    Y
    NESTED               BOOLEAN         IN    Y
    JOB_CLASS            VARCHAR2        IN    Y
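
    A hedged sketch of how the job class might be tied to a RAC service (JOB_CLASS is undocumented, and the class, service and script names below are made up):
    begin
      -- job class bound to a service that runs only on the target node(s)
      dbms_scheduler.create_job_class(
        job_class_name => 'CUBE_NODE1',
        service        => 'svc_node1');
      -- child jobs spawned by the build inherit the class, so they
      -- should stay on instances offering that service
      dbms_cube.build(
        script      => 'my_cube_script',
        parallelism => 10,
        job_class   => 'CUBE_NODE1');
    end;
    /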

  • How can I run a SQL*Loader job from the scheduler

    How can I run a SQL*Loader job from the scheduler, so that it runs every day?

    Depends on a couple of factors.
    If you are on a UNIX platform, you can create a shell script and schedule it with cron.
    If you are on a Windows platform, you can create a batch file and schedule it with the Windows scheduler.
    Or, if you are on Oracle 9i or 10g, you could use the external table feature instead of SQL*Loader. Then you could write a stored procedure to process the external table and schedule it using the Oracle scheduler (DBMS_JOB). This would probably be my preference.
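
    A minimal sketch of that last option, assuming a stored procedure (the name load_from_ext_table is hypothetical) that processes the external table:
    declare
      l_job number;
    begin
      -- run every day at 05:00
      dbms_job.submit(
        job       => l_job,
        what      => 'begin load_from_ext_table; end;',
        next_date => trunc(sysdate) + 1 + 5/24,
        interval  => 'TRUNC(SYSDATE) + 1 + 5/24');
      commit;
    end;
    /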

  • Adobe Acrobat XI Pro, Windows 7, running on iMac Parallels: converting a PDF to a reduced-size PDF is not possible (error: "error in converting the file"). What can I do?

    Adobe Acrobat XI Pro, Windows 7, running on iMac Parallels: converting a PDF to a PDF with reduced size is not possible; the error is "error in converting the file". What can I do? It's a bit annoying not to be able to store PDF files in reduced size. Any ideas? Thanks, Jörg

    Hi Jörg,
    Are you trying to reduce the file size with the "Reduced Size PDF" option under Save As Other?
    Give it a try if you haven't done so already:
    Open the PDF > File > Save As Other > Reduced Size PDF.
    If possible, please share a screenshot of the error message so that we can have a look and assist you further.
    Regards
    Sukrit Dhingra

  • Is an InfoPackage group the only way to run InfoPackages in parallel?

    Hello BW Experts,
    If we want to divide a huge full-upload InfoPackage into several InfoPackages with certain selection ranges each, do we have to run them one by one according to our selections, or can we run them in parallel with each other?
    If we run these InfoPackages in parallel, is an InfoPackage group the only way, or is there any other option, and if yes, which one is better?
    Please help!
    Thanks & regards,
    Sapster.
    (assure points)

    Hi,
    Please do not assure us of your points...points are integral to SDN and everyone knows about them.
    In your previous posts, you have been advised that InfoPackage groups are obsolete and that you should proceed with process chains. Have you read those replies?
    Step-by-Step procedure to create an InfoPackage Group & a Scenario!!
    InfoPackage groups are one option. The better, more widely used and recommended option is using process chains.
    Hope this helps...
