Extract CDATA section from a KML file
Hi all,
I have a KML file as shown below.
I need to extract the paths of all the files listed under the Images section as well as under the Links section.
Can I get some help?
Currently I am able to retrieve the CDATA text content as a string, which makes it difficult to parse further.
Thanks for any help!
/*kml file */
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/.1">
<Document>
<Placemark>
<Description>
<![CDATA[
Images:
<br/>
<img src="D:\dk work\Antons shared\TPAssist-code for v 1.3-KML\fr\42902l.jpg"/>
<br/>
<hr/>
<br/>
<img src="D:\dk work\Antons shared\TPAssist-code for v 1.3-KML\fr\DSC02501.BMP"/>
<br/>
<hr/>
Links:
<br/>
<a href="D:\dk work\Antons shared\TPAssist-code for v 1.3-KML\fr\guitar.txt"/></a>
<br/>
<hr/>
<br/>
<a href="D:\dk work\Antons shared\TPAssist-code for v 1.3-KML\fr\generatekml.txt"/></a>
<br/>
<hr/>
]]>
</Description>
<Point>
<coordinates>0,12,0</coordinates>
</Point>
</Placemark>
<Placemark>
<Point>
<coordinates>0,23,0</coordinates>
</Point>
</Placemark>
</Document>
</kml>
DK_11 wrote:
I tried to use regular expressions on the string that I obtained:
Images:
<br/>
<img src="D:\dk work\Antons shared\TPAssist-code for v 1.3-KML\fr\42902l.jpg"/>
<br/>
<hr/>
<br/>
<img src="D:\dk work\Antons shared\TPAssist-code for v 1.3-KML\fr\ATT1107594.jpg"/>
<br/>
<hr/>

The above is the content of the string that I obtained from the KML file using DOM.
I need to obtain the content of the src attribute, which is the path of a file.
Can I get help with regular expression usage for this string?

Sure you can get help, but we need to see your regex code so we can suggest appropriate modifications.
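For reference, once you have the CDATA text as a string, all the attribute values can be captured with a single pattern. Below is a minimal sketch in Python (the sample string is abbreviated from the KML above); the same regex works unchanged with Java's java.util.regex.Pattern and Matcher:

```python
import re

# CDATA text as retrieved from the <Description> node via DOM (abbreviated)
cdata = r'''Images:
<br/>
<img src="D:\dk work\fr\42902l.jpg"/>
<br/>
<img src="D:\dk work\fr\DSC02501.BMP"/>
Links:
<a href="D:\dk work\fr\guitar.txt"/></a>
'''

# Capture the quoted value of every src="..." or href="..." attribute
paths = re.findall(r'(?:src|href)="([^"]+)"', cdata)
for p in paths:
    print(p)
```

In Java the equivalent is Pattern.compile("(?:src|href)=\"([^\"]+)\"") with matcher.find() in a loop, collecting matcher.group(1) each time.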
Similar Messages
-
Reference a DSL from CDATA section in an Xml file
I use a DSL along with some XML files in an Eclipse project. I would like to reference some of the DSL entities from within a CDATA section in the XML files and provide content assist for doing so.
Would I need to define a DSL for the XML and then implement some DSL cross referencing ?
Would I lose the default XML features of Eclipse if I do so ?
Is there a way to overload the XML support built into Eclipse for implementing this?

I don't know about the extension capabilities of the XML editor. You would need to investigate how this editor can be extended first. This might be a question for Eclipse Web Tools Platform, if you use that XML editor (there are several XML editor plugins available).
-
How can I use Automator to extract specific data from a text file?
I have several hundred text files that contain a bunch of information. I only need six values from each file, and ideally I need them as columns in an Excel file.
How can I use Automator to extract specific data from the text files and either create a new text file or an Excel file with the info? I have looked all over but can't find a solution. If anyone could please help I would be eternally grateful! If there is another, better solution than Automator, please let me know!
Example of File Contents:
Link Time =
DD/MMM/YYYY
Random
Text
161 179
bytes of CODE memory (+ 68 range fill )
16 789
bytes of DATA memory (+ 59 absolute )
1 875
bytes of XDATA memory (+ 1 855 absolute )
90 783
bytes of FARCODE memory
What I would like to have as a final file (Excel columns):

Column 1     Column 2    Column 3  Column 4  Column 5  Column 6
MM/DD/YYYY   filename1   161179    16789     1875      90783
MM/DD/YYYY   filename2   xxxxxx    xxxxx     xxxx      xxxxx
MM/DD/YYYY   filename3   xxxxxx    xxxxx     xxxx      xxxxx
Is this possible? I can't imagine having to go through each and every file one by one. Please help!!!

Hello,
You may try the following AppleScript script. It will ask you to choose a root folder in which to start searching for *.map files, and will then create a CSV file named "out.csv" on the desktop, which you can import into Excel.
set f to (choose folder with prompt "Choose the root folder to start searching")'s POSIX path
if f ends with "/" then set f to f's text 1 thru -2
do shell script "/usr/bin/perl -CSDA -w <<'EOF' - " & f's quoted form & " > ~/Desktop/out.csv
use strict;
use open IN => ':crlf';
chdir $ARGV[0] or die qq($!);
local $/ = qq(\\0);
my @ff = map {chomp; $_} qx(find . -type f -iname '*.map' -print0);
local $/ = qq(\\n);
# CSV spec
# - record separator is CRLF
# - field separator is comma
# - every field is quoted
# - text encoding is UTF-8
local $\\ = qq(\\015\\012); # CRLF
local $, = qq(,); # COMMA
# print column header row
my @dd = ('column 1', 'column 2', 'column 3', 'column 4', 'column 5', 'column 6');
print map { s/\"/\"\"/og; qq(\").$_.qq(\"); } @dd;
# print data row per each file
while (@ff) {
my $f = shift @ff; # file path
if ( ! open(IN, '<', $f) ) {
warn qq(Failed to open $f: $!);
next;
}
$f =~ s%^.*/%%og; # file name
@dd = ('', $f, '', '', '', '');
while (<IN>) {
chomp;
$dd[0] = \"$2/$1/$3\" if m%Link Time\\s+=\\s+([0-9]{2})/([0-9]{2})/([0-9]{4})%o;
($dd[2] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of CODE\\s/o;
($dd[3] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of DATA\\s/o;
($dd[4] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of XDATA\\s/o;
($dd[5] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of FARCODE\\s/o;
last unless grep { /^$/ } @dd;
}
close IN;
print map { s/\"/\"\"/og; qq(\").$_.qq(\"); } @dd;
}
EOF"
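If you want to sanity-check the field extraction outside AppleScript, the same "number before the keyword" idea can be tried in a few lines of Python. The sample line below is made up, assuming the count and the "bytes of ..." label sit on one line as they would in a real linker map file:

```python
import re

line = "161 179  bytes of CODE memory (+ 68 range fill )"

# Same pattern idea as the Perl script: digits (with embedded spaces) before the label
m = re.search(r"([0-9 ]+)\s+bytes of CODE\s", line)
code_bytes = m.group(1).replace(" ", "") if m else ""
print(code_bytes)  # -> 161179
```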
Hope this may help,
H -
Which version of Adobe Acrobat do I need to be able to "extract" a page from an existing file and save/download it to another file?
Acrobat Pro or Standard.
-
Extracting specific data from multiple text files to single CSV
Hello,
Unfortunately my background is not scripting, so I am struggling to piece together a PowerShell script to achieve the below. Hoping an experienced PowerShell scripter can provide the answer. Thanks in advance.
I have a folder containing approx. 2000 label-type files from which I need to extract certain information to index a product catalog. The steps to be performed within the script, as I see them, are:
1. Search folder for *.job file types
2. Search the files for certain criteria and where matched return into single CSV file
3. End result should be a single CSV with column headings:
a) DESCRIPTION
b) MODEL
c) BARCODE

Try:
# Script to extract data from .job files and report it in CSV
# Sam Boutros - 8/24/2014
# http://superwidgets.wordpress.com/category/powershell/
$CSV = ".\myfile.csv" # Change this filename\path as needed
$Folders = "d:\sandbox" # You can add multiple search folders as "c:\folder1","\\server\share\folder2"
# End Data entry section
if (-not (Test-Path -Path $CSV)) {
    Write-Output """Description"",""Model"",""Barcode""" | Out-File -FilePath $CSV -Encoding ascii
}
$Files = Get-ChildItem -Path $Folders -Include *.job -Force -Recurse
foreach ($File in $Files) {
    $FileContent = Get-Content -Path $File
    $Keyword = "viewkind4"
    if ($FileContent -match $Keyword) {
        for ($i=0; $i -lt $FileContent.Count; $i++) {
            if ($FileContent[$i] -match $Keyword) {
                $Description = $FileContent[$i].Split("\")[$FileContent[$i].Split("\").Count-1]
            }
        }
    } else {
        Write-Host "Keyword $Keyword not found in file $File" -ForegroundColor Yellow
    }
    $Keyword = "Code:"
    if ($FileContent -match $Keyword) {
        for ($i=0; $i -lt $FileContent.Count; $i++) {
            if ($FileContent[$i] -match $Keyword) {
                $Parts = $FileContent[$i].Split(" ")
                for ($j=0; $j -lt $Parts.Count; $j++) {
                    if ($Parts[$j] -match $Keyword) {
                        $Model = $Parts[$j+1].Trim()
                        $Model = $Model.Split("\")[$Model.Split("\").Count-1]
                    }
                }
            }
        }
    } else {
        Write-Host "Keyword $Keyword not found in file $File" -ForegroundColor Yellow
    }
    $Keyword = "9313"
    if ($FileContent -match $Keyword) {
        for ($i=0; $i -lt $FileContent.Count; $i++) {
            if ($FileContent[$i] -match "9313") {
                $Index = $FileContent[$i].IndexOf("9313")
                $Barcode = $null
                for ($j=0; $j -lt 12; $j++) {
                    $Barcode += $FileContent[$i][($Index+$j)]
                }
            }
        }
    } else {
        Write-Host "Keyword $Keyword not found in file $File" -ForegroundColor Yellow
    }
    Write-Output "File: '$File', Description: '$Description', Model: '$Model', Barcode: '$Barcode'"
    Write-Output """$Description"",""$Model"",""$Barcode""" | Out-File -FilePath $CSV -Append -Encoding ascii
}
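The barcode step above — locate "9313" in a line and take 12 characters starting at that index — is easy to verify in isolation. A quick sketch in Python, with a made-up label line standing in for a real .job file line:

```python
line = "LABEL DATA 9313456789012 END"  # hypothetical .job file line

index = line.find("9313")            # position of the barcode prefix
barcode = line[index:index + 12]     # 12-character barcode starting at the prefix
print(barcode)  # -> 931345678901
```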
Sam Boutros, Senior Consultant, Software Logic, KOP, PA http://superwidgets.wordpress.com (Please take a moment to Vote as Helpful and/or Mark as Answer, where applicable) -
Extract thumbnail images from a cache file without originals?
After an external drive crash, I'm left with only the Bridge (CS2) cache files for a folder full of images. Does anyone know a way to extract the thumbnails from the cache as jpgs or some other standalone format when you *don't* have the original image files? They would be low-res, but better than nothing. Thanks in advance.
>>says that this export technique does not work, it produces a low res screen version of the file.
Are you certain the original files were any better?
My preferred method of doing this works well, but it takes a few extra steps. I'd make a high-res PDF out of the PM file, then pick apart the PDF to extract the graphics. Are the graphics raster or vector? If they're raster, you can use Acrobat's Touch-up Object tool to open them in Photoshop. If they're vector, you can open the PDF in Illustrator and save out the graphics from there.
HTH -
How can I extract text from PDF files, PowerPoint files, and Word files?
hi friends,
I need to extract text from PDF, PowerPoint, and MS Word files. Is it possible with Java? If yes, how can I extract text from those files? Please give a solution to this problem; I would be thankful if you provide one.
regards,
prakash.

Find an API which can read each of those files and start coding.
-
How to use automator to extract specific text from json txt file
I'm trying to set up an Automator folder action to extract certain data from json files. I'm pulling metadata from YouTube videos, and I want to extract the Title of the video, the URL for the video, and the date uploaded.
Sample json data excerpts:
"upload_date": "20130319"
"title": "[title of varying length]"
"webpage_url": "https://www.youtube.com/watch?v=[video id]"
Based on this thread, seems I should be able to have Automator (or any means of using a shell script) find data and extract it into a .txt file, which I can then open as a space delimited file in Excel or Numbers. That answer assumes a static number of digits for the text to be extracted, though. Is there a way Automator can search through the json file and extract the text - however long - after "title" and "webpage_url"?
json files are all in the same folder, and all end in .info.json.
Any help greatly appreciated!

Hello,
You might try the following Perl script, which will process every *.json file in the current directory and yield out.csv.
* The CSV currently uses a space as the field separator, as you requested. Note that Numbers.app cannot import such a CSV file correctly.
#!/bin/bash
/usr/bin/perl -CSDA -w <<'EOF' - *.json > out.csv
use strict;
use JSON::Syck;
$JSON::Syck::ImplicitUnicode = 1;
# json node paths to extract
my @paths = ('/upload_date', '/title', '/webpage_url');
for (@ARGV) {
my $json;
open(IN, "<", $_) or die "$!";
local $/;
$json = <IN>;
close IN;
my $data = JSON::Syck::Load($json) or next;
my @values = map { &json_node_at_path($data, $_) } @paths;
# output CSV spec
# - field separator = SPACE
# - record separator = LF
# - every field is quoted
local $, = qq( );
local $\ = qq(\n);
print map { s/"/""/og; q(").$_.q("); } @values;
}
sub json_node_at_path ($$) {
# $ : (reference) json object
# $ : (string) node path
# E.g. Given node path = '/abc/0/def', it returns either
# $obj->{'abc'}->[0]->{'def'} if $obj->{'abc'} is ARRAY; or
# $obj->{'abc'}->{'0'}->{'def'} if $obj->{'abc'} is HASH.
my ($obj, $path) = @_;
my $r = $obj;
for ( map { /(^.+$)/ } split /\//, $path ) {
if ( /^[0-9]+$/ && ref($r) eq 'ARRAY' ) {
$r = $r->[$_];
} else {
$r = $r->{$_};
}
}
return $r;
}
EOF
For an Automator workflow, you may use a Run Shell Script action as follows, which will receive the json files and yield out_YYYY-MM-DD_HHMMSS.csv on the desktop.
Run Shell Script action
- Shell = /bin/bash
- Pass input = as arguments
- Code = as follows
#!/bin/bash
/usr/bin/perl -CSDA -w <<'EOF' - "$@" > ~/Desktop/out_"$(date '+%F_%H%M%S')".csv
use strict;
use JSON::Syck;
$JSON::Syck::ImplicitUnicode = 1;
# json node paths to extract
my @paths = ('/upload_date', '/title', '/webpage_url');
for (@ARGV) {
my $json;
open(IN, "<", $_) or die "$!";
local $/;
$json = <IN>;
close IN;
my $data = JSON::Syck::Load($json) or next;
my @values = map { &json_node_at_path($data, $_) } @paths;
# output CSV spec
# - field separator = SPACE
# - record separator = LF
# - every field is quoted
local $, = qq( );
local $\ = qq(\n);
print map { s/"/""/og; q(").$_.q("); } @values;
}
sub json_node_at_path ($$) {
# $ : (reference) json object
# $ : (string) node path
# E.g. Given node path = '/abc/0/def', it returns either
# $obj->{'abc'}->[0]->{'def'} if $obj->{'abc'} is ARRAY; or
# $obj->{'abc'}->{'0'}->{'def'} if $obj->{'abc'} is HASH.
my ($obj, $path) = @_;
my $r = $obj;
for ( map { /(^.+$)/ } split /\//, $path ) {
if ( /^[0-9]+$/ && ref($r) eq 'ARRAY' ) {
$r = $r->[$_];
} else {
$r = $r->{$_};
}
}
return $r;
}
EOF
Tested under OS X 10.6.8.
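Since the files are plain JSON, the same three fields can also be pulled with a short Python sketch (shown only as a cross-check of the Perl approach; file names are assumed to end in .info.json as described above):

```python
import csv
import glob
import json
import sys

def extract_fields(data):
    """Return the three requested values from one parsed .info.json object."""
    return [data.get("upload_date", ""),
            data.get("title", ""),
            data.get("webpage_url", "")]

def main():
    rows = []
    for path in glob.glob("*.info.json"):
        with open(path, encoding="utf-8") as f:
            rows.append(extract_fields(json.load(f)))
    # Space-separated output, matching the original request
    csv.writer(sys.stdout, delimiter=" ").writerows(rows)

if __name__ == "__main__":
    main()
```

Because json.load parses the whole file, this handles titles of any length, unlike a fixed-width cut.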
Hope this may help,
H -
Extract SQL history from 10046 trace files
Hi all,
I need to extract the complete sql history from sql trace files to "debug" a client application.
I know I can read the raw trc file and rebuild the sql history looking for the PARSING / EXEC / FETCH entries.
However, this is a very long and boring manual task: do you know if there is some free tool to automate this task?
thanks
Andrea

user585511 wrote:
I agree that the 10046 trace captures everything. If I read the raw trc file I see the DML. The problem is that tkprof's report does not record the DML (maybe it thinks that some DML is recursive SQL and it gets misled... I am not sure), so I am looking for an alternate tool to process 10046 trace files.
Regards
Andrea

Really?
Generate a trace of some dml:
oracle:orcl$
oracle:orcl$ sqlplus /nolog
SQL*Plus: Release 11.2.0.1.0 Production on Thu May 16 08:28:55 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
SQL> conn snuffy/snuffy
Connected.
SQL> alter session set tracefile_identifier = "snuffy_session";
Session altered.
SQL> alter session set events '10046 trace name context forever, level 12';
Session altered.
SQL> insert into mytest values (sysdate);
1 row created.
SQL> commit;
Commit complete.
SQL> ALTER SESSION SET EVENTS '10046 trace name context off';
Session altered.
SQL> exit

Run tkprof on the trace:
oracle:orcl$ ls -l $ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace/*snuffy*.trc
-rw-r----- 1 oracle asmadmin 3038 May 16 08:29 /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_4086_snuffy_session.trc
oracle:orcl$ tkprof /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_4086_snuffy_session.trc snuffy.rpt waits=YES sys=NO explain=system/halftrack
TKPROF: Release 11.2.0.1.0 - Development on Thu May 16 08:31:32 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Look at the report:
oracle:orcl$ cat snuffy.rpt
TKPROF: Release 11.2.0.1.0 - Development on Thu May 16 08:31:32 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Trace file: /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_4086_snuffy_session.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SQL ID: 938dgt554gu98
Plan Hash: 0
insert into mytest <<<<<<<<<<<<<<<< oh my! Here is the insert statement
values
(sysdate)
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 1 5 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 1 5 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 86 (SNUFFY)
Rows Row Source Operation
0 LOAD TABLE CONVENTIONAL (cr=1 pr=0 pw=0 time=0 us)
error during execute of EXPLAIN PLAN statement
ORA-00942: table or view does not exist
parse error offset: 83
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 3.35 3.35
SQL ID: 23wm3kz7rps5y
Plan Hash: 0
commit
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 1 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 1 0
Misses in library cache during parse: 0
Parsing user id: 86 (SNUFFY)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 4.72 8.50
log file sync 1 0.00 0.00
SQL ID: 0kjg1c2g4gdcr
Plan Hash: 0
ALTER SESSION SET EVENTS '10046 trace name context off'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Parsing user id: 86 (SNUFFY)
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 3 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 1 6 1
Fetch 0 0.00 0.00 0 0 0 0
total 6 0.00 0.00 0 1 6 1
Misses in library cache during parse: 0
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 3 0.00 0.00
SQL*Net message from client 3 4.72 11.86
log file sync 1 0.00 0.00
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
3 user SQL statements in session.
0 internal SQL statements in session.
3 SQL statements in session.
0 statements EXPLAINed in this session.
Trace file: /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_4086_snuffy_session.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
3 user SQL statements in trace file.
0 internal SQL statements in trace file.
3 SQL statements in trace file.
3 unique SQL statements in trace file.
58 lines in trace file.
8 elapsed seconds in trace file.
oracle:orcl$ -
How to extract Attribute Value from a DBC file with LabWindows and NI-XNET library
Hi all,
For my application, i would like to feed my LabWindows CVI Test program with data extracted from *.dbc file (created by another team under Vector CANdb++).
These files contains all CAN frame definition
and also some extra information added to :
Message level,
Signal level,
Network Level
These extra information are set by using specific ATTRIBUTE DEFINITIONS - FUNCTIONALITY under Vector CANdb++
The opening of the DataBase works under NI-XNET DataBase Editor as in LabWindows using: nxdbOpenDatabase ( ... )
No attribute seems be displayable under the NI-XNET DataBase Editor (it's not a problem for me)
Now, how, using the NI-XNET API and CVI, be able to extract these specially created attributes ?
Thanks in advance.
PS : In attached picture, a new attribute called Test_NI, connected to a message
Attachments:
EX1.jpg 36 KB

Hi Damien,
To answer your question on whether the XNET API on LabWindows/CVI allows you to gain access to the custom attributes in a DBC file, this is not a supported feature. The DBC format is proprietary from Vector. Also, custom attributes are different for all customers and manufacturers. Those two put together make it really difficult for NI to access them with an API that will be standard and reliable.
We do support common customer attributes for cyclic frames. This is from page 4-278 in the XNET Hardware and Software Manual :
"If you are using a CANdb (.dbc) database, this property is an optional attribute in the file. If NI-XNET finds an attribute named GenMsgSendType, that attribute is the default value of this property. If the GenMsgSendType attribute begins with cyclic, this property's default value is Cyclic Data; otherwise, it is Event Data. If the CANdb file does not use the GenMsgSendType attribute, this property uses a default value of Event Data, which you can change in your application. "
Link to the manual : http://digital.ni.com/manuals.nsf/websearch/32FCF9A42CFD324E8625760E00625940
Could you explain to us the goal of this attribute, and why you need it in your application?
Thanks,
Christophe S.
FSE East of France І Certified LabVIEW Associate Developer І National Instruments France -
How to extract embedded images from a Pagemaker file
this is using version 6.5
select image.
File > Export > Graphic
set the file type and save

>>says that this export technique does not work, it produces a low res screen version of the file.
Are you certain the original files were any better?
My preferred method of doing this works well, but it takes a few extra steps. I'd make a high-res PDF out of the PM file, then pick apart the PDF to extract the graphics. Are the graphics raster or vector? If they're raster, you can use Acrobat's Touch-up Object tool to open them in Photoshop. If they're vector, you can open the PDF in Illustrator and save out the graphics from there.
HTH -
Extract BPEL process from the JAR file deployed to BPEL Server
Hello All-
We have a BPEL process deployed in our production environment but unfortunately we do not have a back-up copy in our test environment.
I got the BPEL deployment JAR file from the production server but I am unable to extract the BPEL Process as well as the XSL file.
I used WINRAR to extract but it was showing that the JAR file is not a valid archive. We need the prod deployed version of the XSL as well as the BPEL process for a change and we are unable to proceed.
If anybody has faced similar kind of issue, please let me know how can it be resolved. Also please let me know if there are any tools which can extract the files.
PLease note that we are on BPEL 10.1.2.0.2
Appreciate your help and thanks in advance.
Thanks,
Dibya

Hi Dibya,
jar -xvf <filename> will work as others said.
However, please make sure you have another TEST/DEV environment running in parallel with PROD.
As always, it is suggested to test any new patches/config changes on the TEST/DEV environment first, then migrate them appropriately to the actual PROD environment.
Also, always take a complete backup before you do any R&D on the SOA server.
Regards
A -
Extracting a subtitle from a VOB file
OK, as much time as I have spent looking for the way to do this, I could have just watched the darn DVD while editing, BUT...
I'm remastering an old public domain movie in FCP. The titles... (this is a silent film, so the "titles" in this usage are the printed words on the screen that the actors are 'saying'. I'll use the word "intertitles" from now on.) are in French and I am producing this for English speakers.
I have a DVD of the same movie with English subtitles (from the actual DVD subtitle track) and I am using FCP to replace the French intertitle cards with English intertitle cards. (Is this clear as mud?)
Instead of having to watch the DVD in real time to get to the next intertitle card, it would be nice to extract the English subtitle track from the DVD and simply have it as either a text or image file that I could have up on my computer monitor, and just page or scroll to the next title.
I've looked at every freeware that claims to do this but nothing seems to work. (DSubtitler seems like it wants to, but the output file looks like this:
1
00:00:00,230 --> 00:00:00,250
2
00:00:06,230 --> 00:00:12,710
(PICTURE)
3
00:00:14,740 --> 00:00:18,989
(PICTURE)
4
00:00:19,750 --> 00:00:22,969
(PICTURE)
5
00:00:33,799 --> 00:00:39,570
(PICTURE)
ffmpegX wants me to re-encode the entire film in order to give me a subtitle, and even then, I'm not sure it would be what I'm looking for.
Even a quicktime file with a screen burn of the subtitle would suffice.
Thanks in advance for any suggestions!!
Dual G5 Mac OS X (10.3.9)

Yes, clear as mud...
Honestly, unless you're doing this sort of job routinely, the fastest bet would be to open the English-subtitled movie in DVD Player, set your fast-forward speed to 16x, scrub through the whole movie, and jot the titles down in TextEdit when they pop up.
it just seems like you're trying to do more work than you have to.
i realize that's not exactly what you're looking for, but i hope it helped.
gardy -
How Do I Extract All Pages From A Pdf File and Turn Them All Into Separate Files
Hi, I have downloaded 100 reports into a single PDF that must be extracted into 100 separate PDFs. A pretty straightforward question that I was not able to find a straightforward answer to.
Thanks for your help in advance!
Matt

I have Adobe Acrobat Pro, Version 10, would that work?
Mind you, I want to take the 100 pages and turn them into 100 PDF files, avoiding of course going through the process of doing it one page at a time, one hundred times lol
Is there a way to Extract all formulas from Crystal Reports files?
Hi:
I have about 40 Crystal Reports (.rpt files). I need to go through each one and look at the formulas to see if a certain variable/value is used, since it has recently changed. It is taking way too long, not to mention it is inefficient, to open each file and look through every place where there could be custom formulas (such as suppression conditions, etc.). Is there a way to easily extract all custom formulas from all those reports into, say, a text file so I can do a search on it? That would be the easiest way to find what I'm looking for.
I have Crystal Reports XI.
Thanks in advance for your help.

One option is to use a tool such as rptInspector. See: http://www.softwareforces.com/Product/ri/pro/3/rptInspector.htm