Is this really a better way of optimizing memory?

Hi Friends!
I am new to LabVIEW and need some help. In the VI described below, I would like to know about optimizing memory.
Problem: I need to take the values of 7 physical quantities for three different time intervals. Each physical quantity has individual elements (which are clusters) whose number depends on user input (it can be any number).
What I have done: I first constructed:
1. An array (call it Array 1 for discussion) of three elements (for the three time intervals), where each element is a cluster (the Level 1 cluster).
2. The Level 1 cluster has 7 arrays (the Level 2 arrays) to represent the 7 physical quantities.
In C, each element of an array occupies the same amount of memory. But does that concept apply here? The elements of the Level 1 cluster are arrays (Level 2), and each Level 2 array could be a different size depending on user input.
If there is another way to handle them, please help me with your valuable suggestions.
Siddhu.

Yikes!
If there was ever a time I could use Greg McKaskle's help it would be now.
I will do my best, so here goes.
App note 154 found here
http://zone.ni.com/devzone/conceptd.nsf/2d17d611efb58b22862567a9006ffe76/370dfc6fd19b318c86256a33006...
says,
Arrays
LabVIEW stores arrays as handles, or pointers to pointers, that contain the size of each dimension of the array in 32-bit integers followed by the data.
and goes on to say
"Clusters
LabVIEW stores cluster elements of varying data types according to the cluster order. .... LabVIEW stores scalar data directly in the cluster. LabVIEW stores arrays, strings, and paths indirectly. The LabVIEW cluster stores a handle that points to the location in memory where the data is stored.
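In other words, the outer structure stores a fixed-size handle per element, while the data each handle points at can vary in size. Java's jagged arrays make a rough analogue (a sketch only; the sizes are invented, and Java references stand in for LabVIEW handles):
public class JaggedLayout {
    public static void main(String[] args) {
        // One "Level 1 cluster" holding 7 "Level 2 arrays": only references
        // are stored inline, so each inner array can have its own length.
        double[][] level2 = new double[7][];
        level2[0] = new double[3];      // PQ1: 3 user-entered elements
        level2[1] = new double[10000];  // PQ2: 10,000 elements
        level2[2] = new double[1];      // PQ3: 1 element
        // Every slot of level2 itself is the same size (one reference),
        // even though the data behind each slot differs wildly.
        System.out.println(level2[1].length);
    }
}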
So after reading that (doesn't App Note 154 just thrill you?) and looking at your example, I would be tempted to say that because you are asking about an
array of
clusters of
clusters of
arrays
then all we have is a bunch of pointers to data that is elsewhere and I would have to agree with vivi's posting.
But that is only talking about how the data is organized in memory; it does not take into consideration what LV is doing with that data, and it also draws a very fine line between THE CLUSTER and THE DATA IN THE CLUSTER.
In C you have to keep that stuff in mind. In LV, the data in memory is kinda worthless unless you "touch" it in some way.
To illustrate that clusters and arrays are the handles plus the data, I put together the following code.
As shown, buffers are allocated in the initial frame and re-used for the rest of the example.
I have configured the two controls "Main Array" and "Main Array2" such that they both contain a single element.
The single element in "Main Array" only has a single element in the array of the first of its sub-clusters.
The single element in "Main Array2" has 10,000 values in the array in the first sub-cluster.
So since I am doing everything "in-place" (not creating new buffers) and if copying an array element is simply a matter of copying a handle over-top of itself, THEN it should take the same amount of time to replace element 0 with element 0 for both "Main Array" and "Main Array2".
Running the VI shows that this is NOT true. The small array takes about 50 ms and the large one about 9 seconds.
So...
While I am developing in LV I do not think about how the data structure is stored but more about how much memory is required to store the data in the wire, and try to minimize how many times that data is duplicated.
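For anyone following along without LabVIEW handy, the effect is easy to mimic in any language where replacing an element forces a copy of the element's data rather than just a handle. A rough Java analogue of the experiment (iteration counts and sizes invented for illustration):
import java.util.Arrays;
public class CopyCost {
    public static void main(String[] args) {
        // Two "arrays of clusters": one element holding 1 value, one holding
        // 10,000 values, mirroring "Main Array" and "Main Array2" above.
        double[][] small = { new double[1] };
        double[][] large = { new double[10000] };
        long t0 = System.nanoTime();
        for (int n = 0; n < 100000; n++) {
            // "Replace element 0 with element 0": the element's data is
            // copied, not just a reference.
            small[0] = Arrays.copyOf(small[0], small[0].length);
        }
        long t1 = System.nanoTime();
        for (int n = 0; n < 100000; n++) {
            large[0] = Arrays.copyOf(large[0], large[0].length);
        }
        long t2 = System.nanoTime();
        System.out.printf("small: %d ms, large: %d ms%n",
                (t1 - t0) / 1000000, (t2 - t1) / 1000000);
    }
}
The copy time tracks the size of the data behind the element, which is the same scaling Ben's VI shows.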
I expect there will be follow-up Q's so let them fly!
Ben
Message Edited by Ben on 11-10-2005 05:47 PM
Ben Rayner
I am currently active on.. MainStream Preppers
Rayner's Ridge is under construction
Attachments:
Memory_allocation.JPG 113 KB
Memory[1].vi 209 KB

Similar Messages

  • Is there any better way of optimizing memory?


    I don't quite understand why optimizing memory is required here, as the data set does not seem very large. That being said, clusters use (relatively) a lot of memory, and nested arrays/clusters are generally considered poor programming. It seems to me that this can be handled with a 3D array (much more memory efficient): 3 pages for the time intervals, 7 rows for the physical quantities, and an indeterminate number of columns for the individual elements.
    How to handle the array depends on what the valid values for the individual elements are. If the data type is numeric and 0 is not a valid user input value, then it's easy. Just build the array row by row. Each row will have a number of columns equal to the largest number of individual elements entered. Data that has not been entered by the user will be 0. For example, at time interval 1:
    PQ1: 9 2 7 4 1 8
    PQ2: 8 7 2
    PQ3: 1
    PQ4: 4 6 8 2
    PQ5: 2 3 4 5 6 7 8
    PQ6: 4 2 7 9
    PQ7: 2 4
    The first page of the array would be:
    9 2 7 4 1 8 0
    8 7 2 0 0 0 0
    1 0 0 0 0 0 0
    4 6 8 2 0 0 0
    2 3 4 5 6 7 8
    4 2 7 9 0 0 0
    2 4 0 0 0 0 0
    If the data type is not numeric or 0 is a valid value, I would initialize a 3D array that is 3 x 7 x (something larger than the max # of individual elements) with some sort of invalid value. Then, as the user inputs individual elements, replace the appropriate array element. This also has the advantage of creating the array ahead of time, so LabVIEW will not have to create any new memory buffers.
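    Since LabVIEW 3D arrays are rectangular, the padding comes for free once the array is initialized. As a rough Java analogue of the same layout (sizes assumed for illustration):
    public class PaddedArray {
        public static void main(String[] args) {
            int intervals = 3, quantities = 7, maxElements = 7; // maxElements from user input
            // Pre-allocated once, zero-filled: 3 pages x 7 rows x maxElements columns.
            double[][][] data = new double[intervals][quantities][maxElements];
            double[] pq1 = { 9, 2, 7, 4, 1, 8 }; // PQ1 at time interval 1
            for (int c = 0; c < pq1.length; c++) {
                data[0][0][c] = pq1[c]; // replace in place, no new allocation
            }
            // Slots never written keep the default 0 (the padding value).
        }
    }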
    Hope this helps.
    Dave.
    ==============================================
    David Kaufman
    LabVIEW Certified Developer
    ==============================================

  • Is this really the only way to download a CD to iTunes? It's way too slow...

    Hi, I have tons of CDs I want to upload into iTunes using my new Dell computer. I just inserted my first CD. It has 11 songs, says it will take 39.8 minutes, and is taking as long to import each song as it does to listen to it. This is not going to be possible with the hundreds of CDs I have. Should I go back to using my Microsoft Media Player? That takes just a few seconds per song. I don't understand why iTunes takes an eternity... it's been over 20 minutes and I'm barely through the first half of the first CD... it will take years at this pace. Thoughts??? Did you go through this too? I already gave up midway through my first CD; I can't use iTunes if it's going to take this long to transfer my music onto it...

    So...basically I just want to know if this is normal, to take this long to transfer music onto iTunes.
    1. It took 40 minutes to upload a 40 minute CD onto iTunes.
    2. It is taking equally long to transfer music that I drag from my Microsoft music library onto iTunes. I started doing this hours ago today and am only on the 4th CD (have hundreds so how long is this going to take? Is it normal to take this long?)
    I had Microsoft Music on my old Dell computer from 2001, I NEVER had to update it, and it took seconds to upload each song, not minutes like iTunes....and that was on my old computer...I just bought a new one I'm using, had to re-install iTunes to the latest version and for the first time downloading CD/s/existing music onto it and it's taking forever. Is it because of the latest 10.5 version? Is it normal? Is everyone else experiencing this? Is it a problem with my computer or iTunes setting? Am I doing something wrong? Please if anyone can share their experience for my reference I would really appreciate it, thanks.

  • HT5139 Is this really the only way to back up service settings?

    Am I completely missing something here, or is Time Machine supposed to be the only easy way to back up service settings in Mountain Lion with Server.app? Is there really no way to export/import service settings anymore? It sounds to me from what I've read and seen of Server.app that if, for example, my DHCP settings get mucked up, I have to take the entire server offline, boot to the recovery partition, and restore from Time Machine? I can't just stop that ONE service and re-import the settings while leaving everything else up and running for 100+ users? Someone please tell me I'm missing something obvious.

    Which service?
    The "serveradmin" command line controls are always available. They can do much more than the GUI can.
    http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man8/serveradmin.8.html
    http://manuals.info.apple.com/en_US/MacOSXSrvr10.3_CommandLineAdminGuide.pdf

  • Is this really the best way Adobe can find to help customers - ask a bunch of strangers?

    ?

    Adobe has a number of ways to help customers... I don't think one could be considered best compared to others... though the one that gives the help that's needed could be considered good enough.  Do you need help?

  • Is this REALLY the easiest/best way to do this?

    I'm no SQL guru by any means, but I can get by. Here is some background.
    I am running Perfmon Data Collector Sets to collect performance counters.
    I am using Relog.exe to take the .blg files and pushing them into a database
    I am attempting to write a query that will ultimately get the information out of the database and see if I can aggregate the data a bit as well. If the last part is better off doing in Excel (where this data will ultimately end up for charting), then I will
    do that part there.
    Here are the table layouts when you import the .blg files:
    Table name: CounterData
    Columns:
    GUID
    CounterID
    RecordIndex
    CounterDateTime
    CounterValue
    FirstValueA
    FirstValueB
    SecondValueA
    SecondValueB
    MultiCount
    Table Name: CounterDetails
    CounterID
    MachineName
    ObjectName
    CounterName
    CounterType
    DefaultScale
    InstanceName
    InstanceIndex
    ParentName
    ParentObjectID
    TimeBaseA
    TimeBaseB
    I need to pull multiple sets of data out of these tables, so I built the following query:
    USE PDB
    SELECT
    CAST(LEFT(CounterDateTime, 16) as smalldatetime) AS CounterDateTime,
    REPLACE(CounterDetails.MachineName,'\\','') AS ComputerName,
    CounterDetails.ObjectName + ISNULL('(' + CounterDetails.InstanceName + ')','') + '\' + CounterDetails.CounterName AS [Counter],
    CounterData.CounterValue
    FROM CounterData
    FULL OUTER JOIN CounterDetails ON CounterData.CounterID = CounterDetails.CounterID
    FULL OUTER JOIN DisplayToID ON CounterData.GUID = DisplayToID.GUID
    WHERE CounterDetails.ObjectName = 'Processor'
    AND CounterDetails.CounterName = '% Processor Time'
    AND CounterDetails.InstanceName = '_Total'
    UNION
    SELECT
    CAST(LEFT(CounterDateTime, 16) as smalldatetime) AS CounterDateTime,
    REPLACE(CounterDetails.MachineName,'\\','') AS ComputerName,
    CounterDetails.ObjectName + ISNULL('(' + CounterDetails.InstanceName + ')','') + '\' + CounterDetails.CounterName AS [Counter],
    CounterData.CounterValue
    FROM CounterData
    FULL OUTER JOIN CounterDetails ON CounterData.CounterID = CounterDetails.CounterID
    FULL OUTER JOIN DisplayToID ON CounterData.GUID = DisplayToID.GUID
    WHERE CounterDetails.ObjectName = 'Web Service'
    AND CounterDetails.CounterName = 'Bytes Received/sec'
    AND CounterDetails.InstanceName = 'AppName'
    ORDER BY 1
    Is this REALLY the best way to do this?
    Also, trying to figure out how to build in data aggregation in 5 minute average blocks.
    Any assistance appreciated!

    Erland,
    Thanks for your response.
    The related items in the tables are CounterID; in a nutshell, what I want to do is the following:
    Get the data from the CounterDetails table that holds the MachineName, CounterName, CounterType and so forth.
    CounterID MachineName ObjectName CounterName CounterType DefaultScale InstanceName InstanceIndex ParentName ParentObjectID TimeBaseA TimeBaseB
    1 \\ServerName Web Service Bytes Received/sec 272696576 -4 Carnival NULL NULL NULL 14318180 0
    2 \\ServerName Web Service Bytes Sent/sec 272696576 -4 AppName NULL NULL NULL 14318180 0
    3 \\ServerName Web Service Current Connections 65536 0 AppName NULL NULL NULL 14318180 0
    4 \\ServerName Web Service Bytes Total/sec 272696576 -4 AppName NULL NULL NULL 14318180 0
    5 \\ServerName Web Service Get Requests/sec 272696320 0 AppName NULL NULL NULL 14318180 0
    6 \\ServerName Web Service Head Requests/sec 272696320 0 AppName NULL NULL NULL 14318180 0
    7 \\ServerName Processor % Processor Time 558957824 0 _Total NULL NULL NULL 10000000 0
    8 \\ServerName PhysicalDisk Disk Reads/sec 272696320 0 _Total NULL NULL -1 14318180 0
    9 \\ServerName PhysicalDisk Disk Reads/sec 272696320 0 0 C: D: E: NULL NULL -1 14318180 0
    10 \\ServerName PhysicalDisk Disk Read Bytes/sec 272696576 -4 _Total NULL NULL -1 14318180 0
    11 \\ServerName PhysicalDisk Disk Read Bytes/sec 272696576 -4 0 C: D: E: NULL NULL -1 14318180 0
    12 \\ServerName PhysicalDisk Disk Writes/sec 272696320 0 _Total NULL NULL -1 14318180 0
    13 \\ServerName PhysicalDisk Disk Writes/sec 272696320 0 0 C: D: E: NULL NULL -1 14318180 0
    14 \\ServerName PhysicalDisk Current Disk Queue Length 65536 1 _Total NULL NULL -1 14318180 0
    15 \\ServerName PhysicalDisk Current Disk Queue Length 65536 1 0 C: D: E: NULL NULL -1 14318180 0
    16 \\ServerName PhysicalDisk % Disk Read Time 542573824 0 _Total NULL NULL -1 10000000 0
    17 \\ServerName PhysicalDisk % Disk Read Time 542573824 0 0 C: D: E: NULL NULL -1 10000000 0
    18 \\ServerName PhysicalDisk Disk Write Bytes/sec 272696576 -4 _Total NULL NULL -1 14318180 0
    19 \\ServerName PhysicalDisk Disk Write Bytes/sec 272696576 -4 0 C: D: E: NULL NULL -1 14318180 0
    20 \\ServerName PhysicalDisk Disk Transfers/sec 272696320 0 _Total NULL NULL -1 14318180 0
    21 \\ServerName PhysicalDisk Disk Transfers/sec 272696320 0 0 C: D: E: NULL NULL -1 14318180 0
    22 \\ServerName PhysicalDisk % Disk Write Time 542573824 0 _Total NULL NULL -1 10000000 0
    23 \\ServerName PhysicalDisk % Disk Write Time 542573824 0 0 C: D: E: NULL NULL -1 10000000 0
    24 \\ServerName Network Interface Bytes Received/sec 272696576 -4 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter NULL NULL -1 14318180 0
    25 \\ServerName Network Interface Bytes Received/sec 272696576 -4 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] NULL NULL -1 14318180 0
    26 \\ServerName Network Interface Bytes Received/sec 272696576 -4 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _3 NULL NULL -1 14318180 0
    27 \\ServerName Network Interface Bytes Received/sec 272696576 -4 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _2 NULL NULL -1 14318180 0
    28 \\ServerName Network Interface Bytes Received/sec 272696576 -4 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _4 NULL NULL -1 14318180 0
    29 \\ServerName Network Interface Bytes Received/sec 272696576 -4 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _4 NULL NULL -1 14318180 0
    30 \\ServerName Network Interface Bytes Received/sec 272696576 -4 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _3 NULL NULL -1 14318180 0
    31 \\ServerName Network Interface Bytes Received/sec 272696576 -4 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _2 NULL NULL -1 14318180 0
    32 \\ServerName Network Interface Bytes Sent/sec 272696576 -4 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter NULL NULL -1 14318180 0
    33 \\ServerName Network Interface Bytes Sent/sec 272696576 -4 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] NULL NULL -1 14318180 0
    34 \\ServerName Network Interface Bytes Sent/sec 272696576 -4 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _3 NULL NULL -1 14318180 0
    35 \\ServerName Network Interface Bytes Sent/sec 272696576 -4 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _2 NULL NULL -1 14318180 0
    36 \\ServerName Network Interface Bytes Sent/sec 272696576 -4 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _4 NULL NULL -1 14318180 0
    37 \\ServerName Network Interface Bytes Sent/sec 272696576 -4 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _4 NULL NULL -1 14318180 0
    38 \\ServerName Network Interface Bytes Sent/sec 272696576 -4 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _3 NULL NULL -1 14318180 0
    39 \\ServerName Network Interface Bytes Sent/sec 272696576 -4 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _2 NULL NULL -1 14318180 0
    40 \\ServerName Network Interface Bytes Total/sec 272696576 -4 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter NULL NULL -1 14318180 0
    41 \\ServerName Network Interface Bytes Total/sec 272696576 -4 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] NULL NULL -1 14318180 0
    42 \\ServerName Network Interface Bytes Total/sec 272696576 -4 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _3 NULL NULL -1 14318180 0
    43 \\ServerName Network Interface Bytes Total/sec 272696576 -4 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _2 NULL NULL -1 14318180 0
    44 \\ServerName Network Interface Bytes Total/sec 272696576 -4 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _4 NULL NULL -1 14318180 0
    45 \\ServerName Network Interface Bytes Total/sec 272696576 -4 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _4 NULL NULL -1 14318180 0
    46 \\ServerName Network Interface Bytes Total/sec 272696576 -4 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _3 NULL NULL -1 14318180 0
    47 \\ServerName Network Interface Bytes Total/sec 272696576 -4 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _2 NULL NULL -1 14318180 0
    48 \\ServerName Network Interface Output Queue Length 65792 0 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter NULL NULL -1 14318180 0
    49 \\ServerName Network Interface Output Queue Length 65792 0 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] NULL NULL -1 14318180 0
    50 \\ServerName Network Interface Output Queue Length 65792 0 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _3 NULL NULL -1 14318180 0
    51 \\ServerName Network Interface Output Queue Length 65792 0 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _2 NULL NULL -1 14318180 0
    52 \\ServerName Network Interface Output Queue Length 65792 0 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _4 NULL NULL -1 14318180 0
    53 \\ServerName Network Interface Output Queue Length 65792 0 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _4 NULL NULL -1 14318180 0
    54 \\ServerName Network Interface Output Queue Length 65792 0 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _3 NULL NULL -1 14318180 0
    55 \\ServerName Network Interface Output Queue Length 65792 0 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _2 NULL NULL -1 14318180 0
    56 \\ServerName Network Interface Current Bandwidth 65792 -6 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter NULL NULL -1 14318180 0
    57 \\ServerName Network Interface Current Bandwidth 65792 -6 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] NULL NULL -1 14318180 0
    58 \\ServerName Network Interface Current Bandwidth 65792 -6 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _3 NULL NULL -1 14318180 0
    59 \\ServerName Network Interface Current Bandwidth 65792 -6 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _2 NULL NULL -1 14318180 0
    60 \\ServerName Network Interface Current Bandwidth 65792 -6 TEAM : Team _0 - Intel[R] PRO_1000 PT Dual Port Server Adapter _4 NULL NULL -1 14318180 0
    61 \\ServerName Network Interface Current Bandwidth 65792 -6 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _4 NULL NULL -1 14318180 0
    62 \\ServerName Network Interface Current Bandwidth 65792 -6 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _3 NULL NULL -1 14318180 0
    63 \\ServerName Network Interface Current Bandwidth 65792 -6 Broadcom BCM5708C NetXtreme II GigE [NDIS VBD Client] _2 NULL NULL -1 14318180 0
    64 \\ServerName Memory % Committed Bytes In Use 537003008 0 NULL NULL NULL NULL 14318180 0
    65 \\ServerName Memory Available MBytes 65792 0 NULL NULL NULL NULL 14318180 0
    66 \\ServerName Memory Committed Bytes 65792 -6 NULL NULL NULL NULL 14318180 0
    67 \\ServerName ASP.NET Requests Current 65536 -1 NULL NULL NULL NULL 14318180 0
    68 \\ServerName ASP.NET Worker Process Restarts 65536 -1 NULL NULL NULL NULL 14318180 0
    69 \\ServerName ASP.NET Applications Running 65536 -1 NULL NULL NULL NULL 14318180 0
    70 \\ServerName ASP.NET Requests Queued 65536 -1 NULL NULL NULL NULL 14318180 0
    71 \\ServerName ASP.NET Application Restarts 65536 -1 NULL NULL NULL NULL 14318180 0
    72 \\ServerName ASP.NET Worker Processes Running 65536 -1 NULL NULL NULL NULL 14318180 0
    73 \\ServerName ASP.NET Request Execution Time 65536 -1 NULL NULL NULL NULL 14318180 0
    Put that together with the Actual CounterData in the appropriately named table.
    GUID CounterID RecordIndex CounterDateTime CounterValue FirstValueA FirstValueB SecondValueA SecondValueB MultiCount
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 1 2015-04-30 13:01:17.165 0 -1927979745 2 -598728243 706 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 2 2015-04-30 13:01:22.173 67581.3633745449 -1927642227 2 -527219720 706 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 3 2015-04-30 13:01:27.165 94686.4063445727 -1927169543 2 -455741935 706 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 4 2015-04-30 13:01:32.172 152203.041104636 -1926407371 2 -384042212 706 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 5 2015-04-30 13:01:37.180 165447.09804292 -1925578898 2 -312344215 706 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 6 2015-04-30 13:01:42.172 171837.776053684 -1924721043 2 -240864459 706 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 7 2015-04-30 13:01:47.180 173383.630948422 -1923852824 2 -169166134 706 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 8 2015-04-30 13:01:52.172 144598.914838348 -1923130989 2 -97690055 706 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 9 2015-04-30 13:01:57.179 174737.727857169 -1922255986 2 -25991455 706 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 10 2015-04-30 13:02:02.171 169321.861725293 -1921410708 2 45486867 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 11 2015-04-30 13:02:07.179 208117.127073016 -1920368562 2 117185118 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 12 2015-04-30 13:02:12.171 141008.757157554 -1919664632 2 188662923 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 13 2015-04-30 13:02:17.178 149495.458222544 -1918916026 2 260361927 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 14 2015-04-30 13:02:22.170 174341.539879002 -1918045605 2 331847154 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 15 2015-04-30 13:02:27.178 143957.916530014 -1917324818 2 403537258 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 16 2015-04-30 13:02:32.170 130619.882518362 -1916672720 2 475018386 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 17 2015-04-30 13:02:37.178 142332.32395318 -1915959971 2 546718673 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 18 2015-04-30 13:02:42.170 184550.944997403 -1915038722 2 618192753 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 19 2015-04-30 13:02:47.177 154267.317838657 -1914266244 2 689889592 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 20 2015-04-30 13:02:52.169 149238.629713526 -1913521218 2 761368514 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 21 2015-04-30 13:02:57.177 188584.542348845 -1912576880 2 833066869 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 22 2015-04-30 13:03:02.169 176918.705027469 -1911693639 2 904548308 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 23 2015-04-30 13:03:07.176 188369.179859497 -1910750431 2 976242743 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 24 2015-04-30 13:03:12.168 148606.905306921 -1910008537 2 1047723754 707 1
    8ADCC3A7-4D90-45A3-B912-FB18C9CB3646 1 25 2015-04-30 13:03:17.176 200078.077196397 -1909006668 2 1119420468 707 1
    The query above is working, but I feel there is a better way to get this done.
    Sample Output:
    CounterDateTime ComputerName Counter CounterValue
    2015-04-30 13:01:00 ServerName Memory\Committed Bytes 23836753920
    2015-04-30 13:01:00 ServerName Memory\Committed Bytes 23837396992
    2015-04-30 13:01:00 ServerName Memory\Committed Bytes 23842693120
    2015-04-30 13:01:00 ServerName Memory\Committed Bytes 23843172352
    2015-04-30 13:01:00 ServerName Memory\Committed Bytes 23861657600
    2015-04-30 13:01:00 ServerName Memory\Committed Bytes 23872827392
    2015-04-30 13:01:00 ServerName Memory\Committed Bytes 23909138432
    2015-04-30 13:01:00 ServerName Memory\Committed Bytes 23960690688
    2015-04-30 13:01:00 ServerName Memory\Committed Bytes 23972872192
    2015-04-30 13:01:00 ServerName PhysicalDisk(_Total)\Current Disk Queue Length 0
    2015-04-30 13:01:00 ServerName Processor(_Total)\% Processor Time 0
    2015-04-30 13:01:00 ServerName Processor(_Total)\% Processor Time 8.65725297547727
    2015-04-30 13:01:00 ServerName Processor(_Total)\% Processor Time 9.34837740384615
    2015-04-30 13:01:00 ServerName Processor(_Total)\% Processor Time 10.45515625
    2015-04-30 13:01:00 ServerName Processor(_Total)\% Processor Time 11.3926622596154
    2015-04-30 13:01:00 ServerName Processor(_Total)\% Processor Time 11.4480309928908
    2015-04-30 13:01:00 ServerName Processor(_Total)\% Processor Time 11.8893621695024
    2015-04-30 13:01:00 ServerName Processor(_Total)\% Processor Time 12.3306933461139
    2015-04-30 13:01:00 ServerName Processor(_Total)\% Processor Time 13.3301821231728
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Received/sec 0
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Received/sec 67581.3633745449
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Received/sec 94686.4063445727
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Received/sec 144598.914838348
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Received/sec 152203.041104636
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Received/sec 165447.09804292
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Received/sec 171837.776053684
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Received/sec 173383.630948422
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Received/sec 174737.727857169
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Total/sec 0
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Total/sec 354821.47994974
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Total/sec 533111.927106303
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Total/sec 849787.130317823
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Total/sec 1015485.82303199
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Total/sec 1286054.48388504
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Total/sec 1528398.33137765
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Total/sec 1600789.68540725
    2015-04-30 13:01:00 ServerName Web Service(AppName)\Bytes Total/sec 1690894.89372096
    2015-04-30 13:02:00 ServerName Memory\Committed Bytes 23781527552
    2015-04-30 13:02:00 ServerName Memory\Committed Bytes 23802056704
    2015-04-30 13:02:00 ServerName Memory\Committed Bytes 23803797504
    2015-04-30 13:02:00 ServerName Memory\Committed Bytes 23821389824
    2015-04-30 13:02:00 ServerName Memory\Committed Bytes 23831420928
    2015-04-30 13:02:00 ServerName Memory\Committed Bytes 23835803648
    2015-04-30 13:02:00 ServerName Memory\Committed Bytes 23850049536
    2015-04-30 13:02:00 ServerName Memory\Committed Bytes 23863857152
    2015-04-30 13:02:00 ServerName Memory\Committed Bytes 23875534848
    2015-04-30 13:02:00 ServerName Memory\Committed Bytes 23917281280
    2015-04-30 13:02:00 ServerName Memory\Committed Bytes 23933739008
    2015-04-30 13:02:00 ServerName Memory\Committed Bytes 23978917888
    I hope this additional information I have provided will help. :)
    Thanks for your time!
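    On the five-minute aggregation part: since the data ends up in Excel anyway, one option is to bucket client-side after the query returns. A hedged Java sketch of the floor-to-five-minutes logic (the sample rows are invented; the same idea can be expressed in T-SQL by grouping on the truncated timestamp):
    import java.time.LocalDateTime;
    import java.time.temporal.ChronoUnit;
    import java.util.LinkedHashMap;
    import java.util.Map;
    public class FiveMinuteAverages {
        static class Bucket { double sum; long n; }
        // Floor a timestamp to the start of its 5-minute block.
        static LocalDateTime floorTo5Min(LocalDateTime ts) {
            LocalDateTime m = ts.truncatedTo(ChronoUnit.MINUTES);
            return m.minusMinutes(m.getMinute() % 5);
        }
        public static void main(String[] args) {
            // Stand-ins for (CounterDateTime, CounterValue) rows from the query.
            Object[][] rows = {
                { LocalDateTime.parse("2015-04-30T13:01:17"), 0.0 },
                { LocalDateTime.parse("2015-04-30T13:01:22"), 67581.36 },
                { LocalDateTime.parse("2015-04-30T13:06:02"), 94686.41 },
            };
            Map<LocalDateTime, Bucket> buckets = new LinkedHashMap<>();
            for (Object[] row : rows) {
                LocalDateTime key = floorTo5Min((LocalDateTime) row[0]);
                Bucket b = buckets.computeIfAbsent(key, k -> new Bucket());
                b.sum += (Double) row[1];
                b.n++;
            }
            buckets.forEach((k, b) -> System.out.println(k + " avg=" + (b.sum / b.n)));
        }
    }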

  • Need help to get alternate or better way to write query

    Hi,
    I am on Oracle 11.2
    DDL and sample data
    create table tab1 -- 1 millions rows at any given time
    id       number       not null,
    ref_cd   varchar2(64) not null,
    key      varchar2(44) not null,
    ctrl_flg varchar2(1),
    ins_date date
    create table tab2 -- close to 100 million rows
    id       number       not null,
    ref_cd   varchar2(64) not null,
    key      varchar2(44) not null,
    ctrl_flg varchar2(1),
    ins_date date,
    upd_date date
    insert into tab1 values (1,'ABCDEFG', 'XYZ','Y',sysdate);
    insert into tab1 values (2,'XYZABC', 'DEF','Y',sysdate);
    insert into tab1 values (3,'PORSTUVW', 'ABC','Y',sysdate);
    insert into tab2 values (1,'ABCDEFG', 'WYZ','Y',sysdate);
    insert into tab2 values (2,'tbVCCmphEbOEUWbxRKczvsgmzjhROXOwNkkdxWiPqDgPXtJhVl', 'ABLIOWNdj','Y',sysdate);
    insert into tab2 values (3,'tbBCFkphEbOEUWbxATczvsgmzjhRQWOwNkkdxWiPqDgPXtJhVl', 'MQLIOWNdj','Y',sysdate);
    I need to get all rows from tab1 that does not match tab2 and any row from tab1 that matches ref_cd in tab2 but key is different.
    Expected Query output
    'ABCDEFG',  'WYZ'
    'XYZABC',   'DEF'
    'PORSTUVW', 'ABC'
    Existing Query
    select
       ref_cd,
       key
    from
        (
        select
            ref_cd,
            key
        from
            tab1, tab2
        where
            tab1.ref_cd = tab2.ref_cd and
            tab1.key    <> tab2.key
        union
        select
            ref_cd,
            key
        from
            tab1
        where
            not exists
               (
               select 1
               from
                   tab2
               where
                   tab2.ref_cd = tab1.ref_cd
               )
        );
    I am sure there will be an alternate way to write this query in better way. Appreciate if any of you gurus suggest alternative solution.
    Thanks in advance.

    Hi,
    user572194 wrote:
    ... DDL and sample data ...
    create table tab2 -- close to 100 million rows
    id       number       not null,
    ref_cd   varchar2(64) not null,
    key      varchar2(44) not null,
    ctrl_flg varchar2(1),
    ins_date date,
    upd_date date
    insert into tab2 values (1,'ABCDEFG', 'WYZ','Y',sysdate);
    insert into tab2 values (2,'tbVCCmphEbOEUWbxRKczvsgmzjhROXOwNkkdxWiPqDgPXtJhVl', 'ABLIOWNdj','Y',sysdate);
    insert into tab2 values (3,'tbBCFkphEbOEUWbxATczvsgmzjhRQWOwNkkdxWiPqDgPXtJhVl', 'MQLIOWNdj','Y',sysdate);
    Thanks for posting the CREATE TABLE and INSERT statements. Remember why you go to all that trouble: so the people who want to help you can re-create the problem and test their ideas. When you post statements that don't work, it's just a waste of time.
    None of the INSERT statements for tab2 work. Tab2 has 6 columns, but the INSERT statements only have 5 values.
    Please test your code before you post it.
    I need to get all rows from tab1 that does not match tab2
    What does "match" mean in this case? Does it mean that tab1.ref_cd = tab2.ref_cd?
    and any row from tab1 that matches ref_cd in tab2 but key is different.
    Existing Query
    select
    ref_cd,
    key
    from
    select
    ref_cd,
    key
    from
    tab1, tab2
    where
    tab1.ref_cd = tab2.ref_cd and
    tab1.key    <> tab2.key
    union
    select
    ref_cd,
    key
    from
    tab1
    where
    not exists
    select 1
    from
    tab2
    where
    tab2.ref_cd = tab1.ref_cd
    Does that really work? In the first branch of the UNION, you're referencing a column called key, but both tables involved have columns called key. I would expect that to cause an error.
    Please test your code before you post it.
    Right before UNION, did you mean
    tab1.key    != tab2.key? As you may have noticed, this site doesn't like to display the <> inequality operator. Always use the other (equivalent) inequality operator, !=, when posting here.
    I am sure there will be an alternate way to write this query in better way. Appreciate if any of you gurus suggest alternative solution.
    Avoid UNION; it can be very inefficient.
    Maybe you want something like this:
    SELECT  tab1.ref_cd
    ,       tab1.key
    FROM    tab1
    LEFT OUTER JOIN tab2 ON tab2.ref_cd = tab1.ref_cd
    WHERE   tab2.ref_cd IS NULL
    OR      tab2.key    != tab1.key
    ;

  • Callouts and anchored objects - there must be a better way to do this

    I've spent a lot of time in the last six months rebuilding PDF files in InDesign. It's part of my ordinary responsibilities, but I'm doing a lot more of it for some reason. Because I'm sending the text of these rebuilt documents out for translation, I like to keep all of the text in a single story. It really helps to have the text in "logical order," I think; when I'm prepping a trifold brochure, I try pretty hard to make sure that the order in which the readers will read the text is duplicated in the flow of the story throughout the ID document.
    So, I'm rebuilding a manual that has a 3-column format on lettersize paper, and it's full of callouts. Chock full of 'em. They're not pull quotes, either; each of these things has unique text. Keeping in mind that I'd like the text in these callouts to remain in the same position in the text once I've linked all the stories and exported an RTF for translation, what's the best way to handle them? What I've been doing is inserting an empty stroked frame as an anchored object, sized and positioned to sit above the text that is supposed to be called out. When my translations come back, they're always longer than the source document, so as I crawl through the text, I resize the anchored frames to match the size and position of the newly expanded translated text, and then nudge them into place with the keyboard.
    There Has To Be a Better Way.
    There is a better way, right? I'm not actually too sure. If I want to actually fill those anchored frames with text, I can't thread them into the story. I suppose that I could just thread the callout frames and assign two RTFs for translation instead of one, but then the "logical order" of my text is thrown out the window. So, I'm down to asking myself "what's more important? reduction of formatting time or maintenance of the flow of the story?" If there's something I'm missing that would let me dodge this decision, I'd love to hear about it. The only thing I can think of would work like this:
    1) Duplicate callout text in the story with a custom swatch "Invisible"
    2) Create "CalloutText" parastyle with "Invisible" swatch and apply it to callout text
    3) Insert anchor for anchored frame immediately before the CalloutText content
    4) Send it out for translation
    5) While I'm waiting for it to come back, write a script that would (dunno if this is possible):
       a) Step through the main story looking for any instance of CalloutText
       b) Copy one contiguous instance of that style to the clipboard
       c) Look back in the story for the first anchor preceding the instance of CalloutText
       d) Fill the anchored object with the text from the clipboard (this is where I'm really clueless)
       e) Apply a new parastyle to the text in the callout
       f) Continue stepping through the story looking for further instances of CalloutText
    If this really is the only decent solution, I'll just head over to the Scripting forum for some help with d). Can any of you make other suggestions?

    In-Tools.com wrote:
    The use of Side Heads saves weeks of manual labor.
    Yup, Harbs, that is exactly what I was describing. If I use the Side Heads plugin to set up a job, will my clients get a missing plug-in warning when they open up the INDD? Will roundtripping through INX strip the plugin but leave the text in the callout? (My clients don't care if the logical flow of the story is broken; it's just me.)
    I'm just curious; seems like a pretty obvious purchase to me. I'll probably try to script a solution anyways, after I buy the plugin; that way I get to learn about handling anchored objects in scripts AND deliver the job on time!

  • Is there a better way to do this projection/aggregate query?

    Hi,
    Summary:
    Can anyone offer advice on how best to use JDO to perform
    projection/aggregate queries? Is there a better way of doing what is
    described below?
    Details:
    The web application I'm developing includes a GUI for ad-hoc reports on
    JDO's. Unlike 3rd party tools that go straight to the database we can
    implement business rules that restrict access to objects (by adding extra
    predicates) and provide extra calculated fields (by adding extra get methods
    to our JDO's - no expression language yet). We're pleased with the results
    so far.
    Now I want to make it produce reports with aggregates and projections
    without instantiating JDO instances. Here is an example of the sort of thing
    I want it to be capable of doing:
    Each asset has one associated t.description and zero or one associated
    d.description.
    For every distinct combination of t.description and d.description (skip
    those for which there are no assets)
    calculate some aggregates over all the assets with these values.
    and here it is in SQL:
    select t.description type, d.description description, count(*) count,
    sum(a.purch_price) sumPurchPrice
    from assets a
    left outer join asset_descriptions d
    on a.adesc_no = d.adesc_no,
    asset_types t
    where a.atype_no = t.atype_no
    group by t.description, d.description
    order by t.description, d.description
    it takes <100ms to produce 5300 rows from 83000 assets.
    The nearest I have managed with JDO is (pseudo code):
    perform projection query to get t.description, d.description for every asset
    loop on results
    if this is first time we've had this combination of t.description,
    d.description
    perform aggregate query to get aggregates for this combination
    The java code is below. It takes about 16000ms (with debug/trace logging
    off, c.f. 100ms for SQL).
    If the inner query is commented out it takes about 1600ms (so the inner
    query is responsible for 9/10ths of the elapsed time).
    Timings exclude startup overheads like PersistenceManagerFactory creation
    and checking the meta data against the database (by looping 5 times and
    averaging only the last 4) but include PersistenceManager creation (which
    happens inside the loop).
    It would be too big a job for us to directly generate SQL from our generic
    ad-hoc report GUI, so that is not really an option.
    KodoQuery q1 = (KodoQuery) pm.newQuery(Asset.class);
    q1.setResult("assetType.description, assetDescription.description");
    q1.setOrdering("assetType.description ascending, assetDescription.description ascending");
    KodoQuery q2 = (KodoQuery) pm.newQuery(Asset.class);
    q2.setResult("count(purchPrice), sum(purchPrice)");
    q2.declareParameters("String myAssetType, String myAssetDescription");
    q2.setFilter("assetType.description == myAssetType && assetDescription.description == myAssetDescription");
    q2.compile();
    Collection results = (Collection) q1.execute();
    Set distinct = new HashSet();
    for (Iterator i = results.iterator(); i.hasNext();) {
        Object[] cols = (Object[]) i.next();
        String assetType = (String) cols[0];
        String assetDescription = (String) cols[1];
        String type_description =
            assetDescription != null
                ? assetType + "~" + assetDescription
                : assetType;
        if (distinct.add(type_description)) {
            Object[] cols2 = (Object[]) q2.execute(assetType, assetDescription);
            // System.out.println("type " + assetType
            //     + ", description " + assetDescription
            //     + ", count " + cols2[0] + ", sum " + cols2[1]);
            q2.closeAll();
        }
    }
    q1.closeAll();

    Neil,
    It sounds like the problem that you're running into is that Kodo doesn't
    yet support the JDO2 grouping constructs, so you're doing your own
    grouping in the Java code. Is that accurate?
    We do plan on adding direct grouping support to our aggregate/projection
    capabilities in the near future, but as you've noticed, those
    capabilities are not there yet.
    -Patrick
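    Until grouping support lands, one way to avoid the per-combination inner query is to fetch the projection once (including purchPrice) and aggregate client-side in a single pass. A plain-Java sketch, with invented stand-in rows where q1's results would go:
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    public class GroupOnePass {
        static class Agg { long count; double sum; }
        public static void main(String[] args) {
            // Stand-ins for rows of (assetType.description, assetDescription.description, purchPrice).
            List<Object[]> rows = new ArrayList<Object[]>();
            rows.add(new Object[] { "Vehicle", "Truck", 30000.0 });
            rows.add(new Object[] { "Vehicle", "Truck", 25000.0 });
            rows.add(new Object[] { "Vehicle", null, 12000.0 });
            Map<String, Agg> groups = new LinkedHashMap<String, Agg>();
            for (Object[] cols : rows) {
                String key = cols[0] + "~" + cols[1]; // combined group key
                Agg a = groups.get(key);
                if (a == null) { a = new Agg(); groups.put(key, a); }
                a.count++;
                a.sum += ((Number) cols[2]).doubleValue(); // accumulate purchPrice
            }
            for (Map.Entry<String, Agg> e : groups.entrySet()) {
                System.out.println(e.getKey() + ": count=" + e.getValue().count + ", sum=" + e.getValue().sum);
            }
        }
    }
    One round trip instead of one aggregate query per distinct combination should remove most of the 9/10ths overhead measured above.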

  • Is there a better way of doing this?

    Finally managed to get navigation to work the way I want!!!
    But not being an expert at Oracle PL/SQL, I'm sure this code could be written a better way.
    I'm currently looping through the sublinks twice: the first time to get a count, the second time to display the actual links. The reason for this is that the bottom link needs a different style. Anyway, here is the code I am using:
    <oracle>
    declare
         x number;
         y number;
    begin
         x := 0;
         y := 0;
     for c1 in (
     select id, display_name, name
     from #owner#.WWSBR_ALL_FOLDERS
     where parent_id = #PAGE.PAGEID# and caid=1917 and DISPLAY_IN_PARENT_FOLDER = 1
     order by display_name
     )
     loop
               x := x+1;
     end loop;
     htp.p('<tr><td id="sidenavtop"><strong>#TITLE#</strong></td></tr>');
     for c1 in (
     select id, display_name, name
     from #owner#.WWSBR_ALL_FOLDERS
     where parent_id = #PAGE.PAGEID# and caid=1917 and DISPLAY_IN_PARENT_FOLDER = 1
     order by display_name
     )
     loop
               y := y+1;
               if x = y then
                    htp.p('<TR><TD id="sidenavbottom">'||c1.display_name||'</TD></TR>');
               else
                    htp.p('<TR><TD id="sidenavitem">'||c1.display_name||'</TD></TR>');
               end if;
     end loop;
    end;
    </oracle>

    Well, you could fetch the count into a local variable, e.g.
    SELECT count(*)
    INTO x
    FROM ...
    WHERE ...;
    and move on, but then you are doing two fetches. I'm really sleepy at the moment, so it's possible this is logically and syntactically fouled up, but another option may be:
    DECLARE
    CURSOR c1 IS
    select id, display_name, name
    from #owner#.WWSBR_ALL_FOLDERS
    where parent_id = #PAGE.PAGEID# and caid=1917 and DISPLAY_IN_PARENT_FOLDER = 1
    order by display_name;
    r1 c1%ROWTYPE;
    l_display_name wwsbr_all_folders.display_name%TYPE;
    BEGIN
    htp.p('<tr><td id="sidenavtop">#TITLE#</td></tr>');
    OPEN c1;
    FETCH c1 INTO r1;
    l_display_name := r1.display_name;
    --hang on to the display name
    WHILE c1%FOUND LOOP
    FETCH c1 INTO r1;
    --see if there's another row...
    IF c1%FOUND THEN
    --if so, ouput the current value of l_display_name as sidenavitem
    htp.p('<TR><TD id="sidenavitem">'|| l_display_name||'</TD></TR>');
    l_display_name := r1.display_name;
    ELSE
    --if not, output the current value of l_display_name as sidenavbottom
    htp.p('<TR><TD id="sidenavbottom">'|| l_display_name||'</TD></TR>');
    END IF;
    END LOOP;
    CLOSE c1;
    end;
    Hope this helps!
    -John
    Message was edited by:
    John Hopkins

  • A better way to do this ?

    Where does the SQL stuff execute in the following stored procedure: directly in the database, or does it go through the Oracle VM first?
    CREATE OR REPLACE AND RESOLVE JAVA SOURCE NAMED "CustomExport" AS
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import oracle.jdbc.OracleDriver;
    public class CustomExport {
        // "do" is a reserved word in Java, so the method needs another name
        public static void doExport() throws SQLException {
            OracleDriver oracle = new OracleDriver();
            DriverManager.registerDriver(oracle);
            // inside the database JVM, the default connection is the current session
            Connection conn = oracle.defaultConnection();
            PreparedStatement st = conn.prepareStatement("select * from table where col = ?");
            st.setString(1, "value");
            ResultSet rs = st.executeQuery();
        }
    }
    And is there a better way to read and parse an XML document with PL/SQL? What I've read about is the ability to parse an XML file to load its data directly into a database table. What I was looking for was just a way to parse XML without having to load any data into tables, so I did it with Java.
    CREATE OR REPLACE AND RESOLVE JAVA SOURCE NAMED "CustomParser" AS
    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.IOException;
    import javax.xml.parsers.ParserConfigurationException;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.SAXException;
    import org.xml.sax.helpers.DefaultHandler;
    public class CustomParser {
        private static final String newLine = System.getProperty("line.separator");
        private static CustomParseHandler handler = new CustomParseHandler();
        public static void parseXMLFile(String fileName)
                throws FileNotFoundException, IOException,
                       ParserConfigurationException, SAXException {
            SAXParserFactory saxFactory = SAXParserFactory.newInstance();
            SAXParser parser = saxFactory.newSAXParser();
            parser.parse(new FileInputStream(fileName), handler);
        }
        // static, so the static "handler" field above can instantiate it
        private static class CustomParseHandler extends DefaultHandler {
            StringBuffer buff;
            public CustomParseHandler() {
                this.buff = new StringBuffer();
            }
            public void startElement(String uri, String localName,
                    String qName, Attributes attributes) throws SAXException {
                buff.append("<").append(qName).append(">");
            }
            public void endElement(String uri, String localName, String qName)
                    throws SAXException {
                buff.append("</").append(qName).append(">").append(newLine);
            }
            public void characters(char ch[], int start, int length)
                    throws SAXException {
                String fieldName = new String(ch, start, length);
                if (fieldName == null || "".equals(fieldName.trim())) {
                    return;
                }
                buff.append(fieldName); // accumulate element text
            }
            public void clearBuffer() {
                buff.delete(0, buff.length());
            }
            public String getXMLString() {
                return buff.toString();
            }
        }
    }

    PLSQL does not go through Java to access the database. The actual access to the database is via the same mechanism for both, so in some sense, both perform about the same. However, PLSQL datatypes have the same representation as database datatypes so there is no conversion. Java datatypes have different representations than database datatypes so there is a conversion cost associated with moving data between Java and the database.
    If your processing is simple and you are moving a lot of data, PLSQL is likely the better choice. If your processing is more complex and you are moving less data, Java is likely the better choice. There are other things such as portability you should consider, but the amount of data and complexity of the code are the first considerations.
    Douglas

  • Looking for a better way to write this SQL

    Oracle version 11R2
    OS version (does not matter)
    What I'm trying to do is write a query that finds public synonyms without a target object. I came up with this, but I'm thinking there's a better way.
    Select
      s.owner, s.synonym_name, s.table_name, s.table_owner, s.db_link, InitCap(o.object_type) object_type
    from  
      sys.DBA_SYNONYMS s, sys.DBA_OBJECTS o
    where 
      s.synonym_name is not null
    and   
      s.table_owner = o.owner (+)
    and   
      s.table_name = o.object_name (+)
    and   
      s.owner = 'PUBLIC'
    and
      object_type is null;
    object_type is null appears to be the weakness. It seems the check of the target object should be better.
    Feedback, comments, queries welcome.

    I'm not sure exactly what "better" means in this context (faster, easier to read, etc.) but I'd tend to use a NOT EXISTS
    SELECT s.*
      FROM dba_synonyms s
    WHERE owner = 'PUBLIC'
       AND s.db_link IS NULL
       AND NOT EXISTS (
        SELECT 1
          FROM dba_objects o
         WHERE o.owner = s.table_owner
           AND o.object_name = s.table_name )
    I added the DB_LINK criteria to filter out public synonyms that reference objects in remote databases, which obviously don't exist in the local DBA_OBJECTS.
    Justin

  • I am having trouble transferring files from an old MacBook (2007) to a MacBook Air over a wireless network.  The connection was interrupted and the time was over 24 hours.  Is there a better way to do this?  I'm using Migration assistant.

    I am having trouble transferring files from an old MacBook (2007) to a MacBook Air over a wireless network.  The connection was interrupted and the time was over 24 hours.  Is there a better way to do this?  I'm using Migration assistant.  The lack of an ethernet port on MacBook air does not help.

    William ..
    Alternative data transfer methods suggested here > OS X: How to migrate data from another Mac using Mavericks

  • There's got to be a better way to do this (RAM preview frustration)

    I loaded a 1:20 second Full HD clip into after effects. I need to edit the video based on certain sounds in the video and see if I'm matching them up correctly by previewing it with sound.
    The problem is I'm getting frustrated due to After Effects not behaving like Premiere. First, who thought it was a good idea not to incorporate sound into After Effects previews? Second, I have an i7 Sandy Bridge processor and 16 GB of RAM, yet it still takes time to render the RAM preview (with no effects on it yet).
    So ram preview is my only option for sound, but the problem is every time I hit ram preview it starts the video all the way from the beginning. This is frustrating as I want to start at a specific point. Imagine having a longer video where the editing needs to take place at the end.
    There are people out there doing a lot more complicated professional projects, what do you guys do to get around this?
    Why can't after effects do some basic things like premiere like render fast with sound? Is it due to Mercury engine and 64 bits?
    This is one of the best products on the market, surely there is a better way to do this right?

    but the problem is every time I hit ram preview it starts the video all the way from the beginning.
    Window --> Preview, enable the "From current time" option
    yet it still takes time to render the ram preview (with no effects on it yet).
    There is no magic button. If it is compressed, naturally it needs to be decompressed and decoded first. This can consume resources even on fast machines. Furthermore, drive speed matters a lot in such cases. This might actually multiply if you use multiprocessing, so for this kind of simple setup it's usually better not to use it. If your hard drives are fragmented or simply generally slow, frames cannot be loaded as fast, and neither will AE be able to use the disk cache. Ergo, convert the footage and move it to the fastest drive in your system.
    what do you guys do to get around this?
    We preview at reduced compo resolution to extend RAM previews and place markers while the RAM preview plays using the * key on the numpad.
    Mylenium

  • Better Way To Do This? Selector Operator...

    I'm currently writing the selection operator for the algorithm. The aim of it is
    to rate how the coursework blocks have been allocated and give their allocation
    a rating...
    How I have done it is to have a method that searches through one of the parent
    timetables. It looks for coursework time blocks. Once it finds one, it notes
    this and looks at the next block along. If this is a coursework time block, it
    notes this as well. I then perform an operation comparing these two coursework
    time blocks to find out if they are for the same module. If they are not, this
    is not a very effective coursework timetable strategy.
    Because of this, I note in an array the positions of these two coursework time
    blocks and give them a fitness rating of 1000. I then go on to see if the next
    block is a coursework time block. If it is, and it's not of the same module ID
    as the two previous ones, then I note these 3 blocks down in an array and give
    them a fitness rating of 2000.
    My concern is that I am using a lot of ifs and for loops, and the code is
    starting to look untidy at best. Is there any better way of doing this?
    Below is my code; a tidier sketch follows it:
    /**
     * @param parentOne
     * @param parentTwo
     */
    public void selectionOperator(ArrayList parentOne, ArrayList parentTwo) {
        // Stores the fitness ratings of sections of the timetable...
        ArrayList parentOneBlockFitnessRating = new ArrayList();
        ArrayList parentTwoBlockFitnessRating = new ArrayList();
        // Loops through the first timetable's time blocks...
        for (int i = 0; i < parentOne.size(); i++) {
            // Checks whether the current time block is a CourseworkTimeBlock;
            // if so it enters this statement...
            if (parentOne.get(i).getClass().toString().equals("class Timetable.CourseworkTimeBlock")) {
                // A temp store for the current CourseworkTimeBlock...
                CourseworkTimeBlock tempBlockOne = (CourseworkTimeBlock) parentOne.get(i);
                System.out.println("Got here! Module ID...: " + tempBlockOne.getModuleId());
                // Checks whether the next time block along is a CourseworkTimeBlock;
                // if so it enters this statement...
                if (parentOne.get(i + 1).getClass().toString().equals("class Timetable.CourseworkTimeBlock")) {
                    // A temp store for the next CourseworkTimeBlock...
                    CourseworkTimeBlock tempBlockTwo = (CourseworkTimeBlock) parentOne.get(i + 1);
                    System.out.println("Got here as well!");
                    // Checks whether the current and next CourseworkTimeBlock module IDs
                    // are the same; if they aren't, this section is entered...
                    if (!tempBlockOne.getModuleId().equals(tempBlockTwo.getModuleId())) {
                        // Checks whether the second time block along is a CourseworkTimeBlock;
                        // if so it enters this statement...
                        if (parentOne.get(i + 2).getClass().toString().equals("class Timetable.CourseworkTimeBlock")) {
                            // A temp store for the second CourseworkTimeBlock along...
                            CourseworkTimeBlock tempBlockThree = (CourseworkTimeBlock) parentOne.get(i + 2);
                            // Checks whether the third block's module ID differs from both
                            // of the previous two; if so this statement is entered...
                            if (!tempBlockThree.getModuleId().equals(tempBlockOne.getModuleId())
                                    && !tempBlockThree.getModuleId().equals(tempBlockTwo.getModuleId())) {
                                // ArrayList to store the fitness rating of the current
                                // block selection...
                                ArrayList<Integer> blockFitness = new ArrayList<Integer>();
                                blockFitness.add(i);      // position of first block
                                blockFitness.add(i + 1);  // position of second block
                                blockFitness.add(i + 2);  // position of third block
                                blockFitness.add(2000);   // fitness value
                                // Add block rating to main rating ArrayList...
                                parentOneBlockFitnessRating.add(blockFitness);
                            } else {
                                ArrayList<Integer> blockFitness = new ArrayList<Integer>();
                                blockFitness.add(i);      // position of first block
                                blockFitness.add(i + 1);  // position of second block
                                blockFitness.add(1000);   // fitness value
                                // Add block rating to main rating ArrayList...
                                parentOneBlockFitnessRating.add(blockFitness);
                            }
                        }
                    }
                }
            }
        }
        // The same pass again for the second timetable...
        for (int o = 0; o < parentTwo.size(); o++) {
            if (parentTwo.get(o).getClass().toString().equals("class Timetable.CourseworkTimeBlock")) {
                CourseworkTimeBlock tempBlockOne = (CourseworkTimeBlock) parentTwo.get(o);
                System.out.println("Got here! Module ID...: " + tempBlockOne.getModuleId());
                if (parentTwo.get(o + 1).getClass().toString().equals("class Timetable.CourseworkTimeBlock")) {
                    CourseworkTimeBlock tempBlockTwo = (CourseworkTimeBlock) parentTwo.get(o + 1);
                    System.out.println("Got here as well!");
                    if (!tempBlockOne.getModuleId().equals(tempBlockTwo.getModuleId())) {
                        if (parentTwo.get(o + 2).getClass().toString().equals("class Timetable.CourseworkTimeBlock")) {
                            CourseworkTimeBlock tempBlockThree = (CourseworkTimeBlock) parentTwo.get(o + 2);
                            if (!tempBlockThree.getModuleId().equals(tempBlockOne.getModuleId())
                                    && !tempBlockThree.getModuleId().equals(tempBlockTwo.getModuleId())) {
                                ArrayList<Integer> blockFitness = new ArrayList<Integer>();
                                blockFitness.add(o);      // position of first block
                                blockFitness.add(o + 1);  // position of second block
                                blockFitness.add(o + 2);  // position of third block
                                blockFitness.add(2000);   // fitness value
                                parentTwoBlockFitnessRating.add(blockFitness);
                            } else {
                                ArrayList<Integer> blockFitness = new ArrayList<Integer>();
                                blockFitness.add(o);      // position of first block
                                blockFitness.add(o + 1);  // position of second block
                                blockFitness.add(1000);   // fitness value
                                parentTwoBlockFitnessRating.add(blockFitness);
                            }
                        }
                    }
                }
            }
        }
    }
    As you can see there are a lot of if statements and some bad coding practice to boot, but I don't know what other ways to do it...
    Any pointers toward other ways to do this?
    Many Thanks
    Chris

    Unfortunately, I think you're stuck with a bunch of if-statements.
    Fortunately, I have some things that may help you.
    First, I usually check whether something is an instance of a given class like this:
    if (someObject instanceof SomeClass) {
    So I'd adjust your 'class checking' conditionals from this
    if (parentOne.get(i).getClass().toString().equals("class Timetable.CourseworkTimeBlock")) {
    to this
    if (parentOne.get(i) instanceof Timetable.CourseworkTimeBlock) {
    (Note that instanceof also matches subclasses, whereas comparing getClass() strings only matches the exact class.)
    Secondly, your code logic is kind of confusing.
    Why do you have a for loop that iterates over every time block, if you then (within each iteration) also look at positions i+1 and i+2? What happens when those throw an ArrayIndexOutOfBoundsException, or the elements are null?
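    To make that concrete, here is a rough sketch of how both of your near-identical loops could collapse into one bounds-checked helper that uses instanceof as suggested above. It is only an illustration under some assumptions: the helper name rateParent is made up, the raw ArrayList parameters and the position/fitness list layout are kept from your posting, and the CourseworkTimeBlock class below is a minimal stand-in (just a getModuleId()) so the sketch compiles on its own.

    import java.util.ArrayList;

    // Minimal stand-in for the real class, just enough to compile this sketch.
    class CourseworkTimeBlock {
        private final String moduleId;
        CourseworkTimeBlock(String moduleId) { this.moduleId = moduleId; }
        String getModuleId() { return moduleId; }
    }

    public class SelectionOperatorSketch {

        // Hypothetical helper: rates one parent timetable and returns its
        // per-section fitness lists, so the loop body exists only once.
        private static ArrayList<ArrayList<Integer>> rateParent(ArrayList parent) {
            ArrayList<ArrayList<Integer>> ratings = new ArrayList<ArrayList<Integer>>();
            // Stop two short of the end so i + 1 and i + 2 can never go out of bounds.
            for (int i = 0; i + 2 < parent.size(); i++) {
                // Skip unless the current and next blocks are both coursework blocks.
                if (!(parent.get(i) instanceof CourseworkTimeBlock)
                        || !(parent.get(i + 1) instanceof CourseworkTimeBlock)) {
                    continue;
                }
                CourseworkTimeBlock one = (CourseworkTimeBlock) parent.get(i);
                CourseworkTimeBlock two = (CourseworkTimeBlock) parent.get(i + 1);
                // Adjacent coursework blocks for the same module are fine: no penalty.
                if (one.getModuleId().equals(two.getModuleId())) {
                    continue;
                }
                ArrayList<Integer> blockFitness = new ArrayList<Integer>();
                blockFitness.add(i);       // position of first block
                blockFitness.add(i + 1);   // position of second block
                if (parent.get(i + 2) instanceof CourseworkTimeBlock) {
                    CourseworkTimeBlock three = (CourseworkTimeBlock) parent.get(i + 2);
                    // A third coursework block differing from both previous is worse: 2000.
                    if (!three.getModuleId().equals(one.getModuleId())
                            && !three.getModuleId().equals(two.getModuleId())) {
                        blockFitness.add(i + 2);  // position of third block
                        blockFitness.add(2000);   // fitness value
                        ratings.add(blockFitness);
                        continue;
                    }
                }
                blockFitness.add(1000);  // fitness value for the mismatched pair
                ratings.add(blockFitness);
            }
            return ratings;
        }

        public static void selectionOperator(ArrayList parentOne, ArrayList parentTwo) {
            ArrayList<ArrayList<Integer>> parentOneBlockFitnessRating = rateParent(parentOne);
            ArrayList<ArrayList<Integer>> parentTwoBlockFitnessRating = rateParent(parentTwo);
            System.out.println(parentOneBlockFitnessRating);
            System.out.println(parentTwoBlockFitnessRating);
        }
    }

    Two behavioural notes on the sketch: instanceof is false for null, so null entries are skipped rather than throwing, and the 1000 pair rating is recorded even when the slot two along isn't a coursework block at all, which is what your prose description says but which the nesting in your posting skips.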
