Optimal length in FM "RKD_WORD_WRAP"

Hi all,
Regarding the usage of FM "RKD_WORD_WRAP",
when wrapping a sentence containing only English characters, it is easy to decide the maximum line length (OUTPUTLEN). But what about a sentence that is a mixture of Chinese and English characters, where the distribution varies from sentence to sentence? How do I determine the optimal line length so that no overlapping occurs after calling the FM?
Thanks a lot!

OK.
This is part of the code of the FM you are using:
CONSTANTS:
  MAX_OUTPUTLEN       TYPE I VALUE 256.              "WP210396/30D
DATA: IPOS            LIKE SY-FDPOS,                 "cut position
      LNUM            TYPE I,                        "line number
      STRING_LENGTH   LIKE SY-FDPOS,                 "remaining length
      OUTLINE(MAX_OUTPUTLEN).                        "tmp: textline
DATA: TEXTLEN         TYPE I.                        "len of textline    H48596
DATA: BEGIN OF LT_LINES OCCURS 0,                    "H799865
        LINE(256) TYPE C,                            "H799865
      END OF LT_LINES.                               "H799865
FIELD-SYMBOLS: <F>   TYPE C,                         "looks for DELIMITER
               <OUT> TYPE C.                         "outline reference

IF OUTPUTLEN > MAX_OUTPUTLEN.
  RAISE OUTPUTLEN_TOO_LARGE.
ENDIF.
DESCRIBE FIELD TEXTLINE LENGTH TEXTLEN IN CHARACTER MODE.
IF TEXTLEN LT OUTPUTLEN.
  OUTPUTLEN = TEXTLEN.
ENDIF.
IPOS = OUTPUTLEN.
LNUM = 1.
CLEAR OUT_LINES.
STRING_LENGTH = STRLEN( TEXTLINE ).                  "H799865
IF DELIMITER EQ SPACE OR STRING_LENGTH GT OUTPUTLEN. "H799865
  DO.
    CASE LNUM.
      WHEN 1.
        ASSIGN OUT_LINE1 TO <OUT>.
      WHEN 2.
        ASSIGN OUT_LINE2 TO <OUT>.
      WHEN 3.
        ASSIGN OUT_LINE3 TO <OUT>.
      WHEN OTHERS.
        ASSIGN OUTLINE TO <OUT>.
    ENDCASE.
    IPOS = IPOS - 1.
    STRING_LENGTH = STRLEN( TEXTLINE ).
As you can see, the maximum OUTPUTLEN is 256; if you pass anything larger, the FM raises the OUTPUTLEN_TOO_LARGE exception. OUTPUTLEN is also capped at the actual length of TEXTLINE. I think this gives you an idea of the optimal length.
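For reference, here is a minimal calling sketch. The parameter names are taken from the FM source above; the sample text and the OUTPUTLEN of 20 are purely illustrative, and for mixed Chinese/English text you would size OUTPUTLEN for the worst case your output medium allows:

DATA: LV_TEXT  TYPE C LENGTH 256,
      LV_LINE1 TYPE C LENGTH 256,
      LV_LINE2 TYPE C LENGTH 256,
      LV_LINE3 TYPE C LENGTH 256,
      LT_LINES TYPE STANDARD TABLE OF CHAR256.

LV_TEXT = 'Some mixed 中英文 sample text to wrap'.   "illustrative only

CALL FUNCTION 'RKD_WORD_WRAP'
  EXPORTING
    TEXTLINE            = LV_TEXT
    DELIMITER           = SPACE       "break at spaces where possible
    OUTPUTLEN           = 20          "must not exceed 256
  IMPORTING
    OUT_LINE1           = LV_LINE1
    OUT_LINE2           = LV_LINE2
    OUT_LINE3           = LV_LINE3
  TABLES
    OUT_LINES           = LT_LINES
  EXCEPTIONS
    OUTPUTLEN_TOO_LARGE = 1
    OTHERS              = 2.
IF SY-SUBRC <> 0.
  "OUTPUTLEN was larger than MAX_OUTPUTLEN (256); handle the error here
ENDIF.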
Regards,
Vishwa.

Similar Messages

  • Best procedures to record a long presentation

    Hello everyone,
    Glad to know about this forum.
    I've been converting PPT slides into captivate3 and recording
    the narration. At first, we recorded about 20 minutes of
    conversation for 15 slides. The sound file was easy to manipulate
    (remove hums and bits of speech ;-)), the captivate 3 file was over
    120'000 KB but no crash, t'was fine. Excellent sound quality but
    the image quality was not so good and the text a bit deformed.
    Following advice on this forum, I recorded the PPT show onscreen.
    The result: very good image quality.
    Then, we decided to introduce 1-2 bullet points of text at a
    time. From what I could figure out, each bullet point had to become
    a slide in captivate in order to be able to modify its timeline.
    Then, also thanks to you all, I found Audio > Edit
    timing > Projects. I took the sound file of the whole presentation
    and have been adding the slide marks. I've had to redo this
    several times, and it is not over yet, because, despite the fact
    that I try to save every 2-3 slides, all of a sudden, instead of
    it being slide 15 at 9 minutes of track, it is slide 4 at 9
    minutes, and all the other slide markers previously added disappear. So
    here I am, back to the slide 4 "mark" although I am at slide
    19... with 20 more to go, and 17 earlier ones to redo....
    Here are questions:
    Has anyone had this problem in setting up the timing?
    Is there an optimal length of sound file to work with
    Captivate 3 (for example, don't go over 5-10-15-30 minutes)?
    Is there a way to capture bullet point animation on one
    slide?
    How would you go about with the project?
    I thought of recording in PPT and then link audio in
    Captivate, but the sound quality really is v. good in captivate 3.
    My temp solution was to save each time and, if the above error
    occurred, to close the file without saving and return to the last
    save. Even that doesn't work!!!!
    Many thanks!!

    Papillon
    sorry, only glanced through the post, so I missed the point
    regarding timing appearing to have a life of its own; I have had
    lots of frustrations with that one.
    If I have it right: you record a project, add narration,
    publish, and everything is OK. But once you amend the project, the
    timing goes all over the place no matter what you do; even multiple
    saving doesn't seem to work..... sound about right???
    After much gnashing of teeth I think I got to the bottom of
    it. If you make amendments to a project that has narration, I
    suggest that, before even listening to the narration and then
    editing the audio, you go back through the project, slide by slide,
    and check for timeline changes, because changes to a slide can change
    its duration, which will create a large silence in the project
    narration. If you don't "touch up" the physical timing of a slide,
    any edit to the audio will have no effect, and a silence is
    created.
    But by far the easiest method I have found is this: if you
    need to add/delete slides once narration is attached, take a copy of
    the project, delete the narration from the copy, paste the original
    narration onto the copy, and use Edit timing > Projects to
    distribute it through the new project. This is much less stressful,
    and is where recording in 5-minute chunks really comes into its own.
    Again, hope this helps
    Robbie

  • Time Code Effect: Source Monitor and rendered .AVI movie display different timings???

    I'm seeing a strange error and hope someone might be able to help me out. I am shooting elementary classroom reading lessons for a study by the University of Michigan's School of Education. It is important to the study to see what takes place over about a 2-hour period of time.
    I'm shooting with 2 cameras, 1 on the teacher, the other on the students, both on mini DV. The first camera I start about 4 minutes ahead of the second so that when the 1-hour tape is filled I can switch the tape in camera 1 while still capturing activity on camera 2.
    When I edit the footage I am cutting back and forth between the 2 cameras. One initial shot is of the classroom clock as it ticks from one minute to the next. I later use this footage as a guide when establishing the time of day.
    When the entire lesson is edited together and cut, I add two transparent video tracks, each with the Timecode effect added. The first I synchronize with the classroom clock, setting the Timecode Source to Generate and matching the Starting Timecode with the clock footage. I then crop the frames portion so viewers will know the time of day the footage was shot. The other I leave with the Timecode Source set to Clip. I use the size and position controls to move the two timers into the upper left of the screen.
    When I view the finished video in the Program Monitor everything looks great. When I Export | Movie to an .AVI file, the video looks correct but the timers are off. The last frame of the video in the Source Monitor displays 11:28:19 for the clock timer and 02;02;00;23 for the total time timer. But when I view the .AVI, the last frame displays 11;03;40 for the clock and 01;37;36;25 for the total time.
    So how did I lose 24 minutes between the Source Monitor display and the rendered .AVI display?

    Yes. After I had the .AVI I ran through the two side-by-side (.avi and the Source Monitor). I have since experimented with some of the Time Code effect settings trying to see if these might have some impact. I have not been able to find any detailed explanation of what each setting controls/does other than an overview (see link below).
    I suspect that this has something to do with the size/length of the clip 2+ hours. Is there an optimal length for an individual clip? Should I be splitting this into 2 or 3 shorter sequences and then putting them on a DVD with Encore and setting it up to play them back-to-back?
    http://livedocs.adobe.com/en_US/PremierePro/3.0/help.html?content=WS4AE872FF-BC0F-45e8-AD8 0-C2784D17CDFE.html

  • How will you know that a dimension can be a line-item dimension before loading into a cube

    Hi, I'm new to this. My query is: how will you know that a dimension will be a line-item dimension before loading into a cube? Are there any standards to be followed? Can you give me some real-time detailed scenarios?

    Hi,
         When compared to a fact table, dimensions ideally have a small cardinality. However, there is an exception to this rule. For example, there are InfoCubes in which a characteristic Document is used, in which case almost every entry in the fact table is assigned to a different Document. This means that the dimension (or the associated dimension table) has almost as many entries as the fact table itself. We refer here to a degenerated dimension.
    Generally, relational and multi-dimensional database systems have problems processing such dimensions efficiently. You can use the indicators line item and high cardinality to execute the following optimizations:
          1.      Line item: This means the dimension contains precisely one characteristic. This means that the system does not create a dimension table. Instead, the SID table of the characteristic takes on the role of dimension table. Removing the dimension table has the following advantages:
           When loading transaction data, no IDs are generated for the entries in the dimension table. This number range operation can compromise performance precisely in the case where a degenerated dimension is involved.
          A table having a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler. In many cases, the database optimizer can choose better execution plans.
    Nevertheless, it also has a disadvantage: A dimension marked as a line item cannot subsequently include additional characteristics. This is only possible with normal dimensions.
    We recommend that you use DataStore objects, where possible, instead of InfoCubes for line items. See Creating DataStore Objects.
           2.      High cardinality: This means that the dimension is to have a large number of instances (that is, a high cardinality). This information is used to carry out optimizations on a physical level, depending on the database platform. Different index types are used than is normally the case. A general rule is that a dimension has a high cardinality when the number of dimension entries is at least 20% of the fact table entries. If you are unsure, do not select a dimension as having high cardinality.
    Activities
    When creating dimensions in the InfoCube maintenance, flag the relevant dimension as a line item or as having high cardinality.
    Define lots of small dimensions rather than a few large dimensions.
         The size of the dimension tables should account for less than 10% of the fact table.
         If the size of the dimension table amounts to more than 10% of the fact table, mark the dimension as a line item dimension.
    To attain good performance for a query on non-cumulative InfoCubes, you should take note of the following:
    Compression:
    Compress all requests in the non-cumulative InfoCube, or at least most of them.
    The performance of a query based on a non-cumulative InfoCube depends heavily on how the InfoCube is compressed. If you want to improve the performance of a query of this type, first check, insofar as this is possible, whether the data in the InfoCube should be compressed. You should always compress data when you are sure that the requests affected will not need to be deleted from the InfoCube.
    Validity Table
    Use as few validity-determining characteristics as possible.
    The number and cardinality of the validity-determining characteristics heavily influences performance. Therefore, you should only define characteristics as validity-determining characteristics when it is really necessary.
    Time Restrictions in the Query
    As far as possible, restrict queries based on non-cumulative InfoCubes to time characteristics.
    The stricter the time-based restriction, the faster the query is generally executed, since the non-cumulative has to be reconstructed for a smaller number of points in time.
    Time Drilldown in the Query
    If you no longer need the average, split a query on a non-cumulative InfoCube (which contains both key figures with LAST aggregation and key figures with AVERAGE aggregation) into two queries.
    With non-cumulative key figures with the exception aggregation LAST, the time characteristic included in the drilldown makes a difference to performance. If, for example, both Calendar Day and Calendar Month are included in the InfoCube, drilldown by month is faster than drilldown by day, because the number of times for which a non-cumulative has to be calculated is smaller.
    For the other types of exception aggregation (average, average weighted with factory calendar, minimum and maximum), this rule is not valid as in these cases, the data is always calculated on the level of the most detailed time characteristic first before exception aggregation is performed.
    Totals Rows
    Hide the totals row in the query when not required.
    Depending on the type of aggregation being used, the calculation of totals rows can be very time-consuming.
    When selecting MDC dimensions, proceed as follows:
            Select dimensions for which you often use restrictions in queries.
            Select dimensions with a low cardinality.
    The MDC dimension is created in the column with the dimension keys (DIMID). The number of different combinations in the dimension characteristics determines the cardinality. Therefore, select a dimension with either one, or few characteristics and with only a few different characteristic values.
    Line item dimensions are not usually suitable, as they normally have a characteristic with a high cardinality.
    If you specifically want to create an MDC dimension for a characteristic with a low cardinality, you can define this characteristic as a line item dimension in the InfoCube. This differs from the norm that line item dimensions contain characteristics with a very high cardinality. However, this has the advantage for multidimensional clustering that the fact table contains the SID values of the characteristic, in place of the dimension keys, and the database query can be restricted to these SID values.
            You cannot select more than three dimensions, including the time dimension.
            Assign sequence numbers, using the following criteria:
            Sort the dimensions according to how often they occur in queries (assign the lowest sequence number to the InfoObject that occurs most often in queries).
            Sort the dimensions according to selectivity (assign the lowest sequence number to the dimension with the most different data records).
    Note: At least one block is created for each value combination in the MDC dimension. This memory area is reserved independently of the number of data records that have the same value combination in the MDC dimension. If there is not a sufficient number of data records with the same value combinations to completely fill a block, the free memory remains unused. This is so that data records with a different value combination in the MDC dimension cannot be written to the block.
    If for each combination that exists in the InfoCube, only a few data records exist in the selected MDC dimension, most blocks have unused free memory. This means that the fact tables use an unnecessarily large amount of memory space. Performance of table queries also deteriorates, as many pages with not much information must be read.
    Example
    The size of a block depends on the PAGESIZE and the EXTENTSIZE of the tablespace. The standard PAGESIZE of the fact-table tablespace with the assigned data class DFACT is 16K. Up to Release SAP BW 3.5, the default EXTENTSIZE value was 16. As of Release SAP NetWeaver 2004s the new default EXTENTSIZE value is 2.
    With an EXTENTSIZE of 2 and a PAGESIZE of 16K the memory area is calculated as 2 x 16K = 32K, this is reserved for each block.
    The width of a data record depends on the number of dimensions and the number of key figures in the InfoCube. A dimension key field uses 4 bytes and a decimal key figure uses 9 bytes. If, for example, an InfoCube has 3 standard dimensions, 7 customer dimensions and 30 decimal key figures, a data record needs 10 x 4 bytes + 30 x 9 bytes = 310 bytes. A 32K block can therefore hold 32768 bytes / 310 bytes = 105 data records.
    If the time characteristic calendar month (0CALMONTH) and a customer dimension are selected as the MDC dimension for this InfoCube, at least 100 data records should exist for each InfoPackage, for each calendar month and for each dimension key of the customer dimension. This allows optimal use of the memory space in the F fact table. In the E fact table, this is valid for each calendar month and each dimension key of the customer dimension. If a dimension contains a characteristic whose value already uniquely determines the values of all other characteristics from a business-orientated viewpoint, then the dimension is named after this characteristic.
      The customer dimension could, for example, be made up of the customer number, the customer group and the levels of the customer hierarchy.
    The sales dimension could contain the characteristics ‘sales person’, ‘sales group’ and ‘sales office’.                                            
    The time dimension could be given using the characteristics ‘day’ (in the form YYYYMMDD), ‘week’ (in the form YYYY.WW), ‘month’ (in the form YYYY.MM), ‘year’ (in the form YYYY) and ‘period’ (in the form YYYY.PPP).
    Use
    When defining an InfoCube, characteristics for dimensions are grouped together to enable them to be stored in a star schema table (dimension table). The aforementioned business-orientated grouping can be the basis for this. With the aid of a simple foreign key dependency, dimensions are linked to one of the key fields of the fact table.
    When you create an InfoCube, the dimensions data package, time and unit are already defined by default. The data package dimension contains technical characteristics. Time characteristics and units are automatically assigned to the corresponding dimensions. When you activate the InfoCube, only those dimensions that contain InfoObjects are activated.
    From a technical viewpoint several characteristic values are mapped to an abstract dimension key (DIM ID), to which the values in the fact table refer. The characteristics chosen for an InfoCube are divided up among InfoCube-specific dimensions when creating the InfoCube.
    Also refer to the following for specific cases arising when defining dimensions:
    Line Item and High Cardinality
    The methods for setting and getting data from a named range use the separation between the description of the range and the data itself. Note that the sequence must be observed both in the range description (structure soi_range_list) and in the data (structure soi_generic_table). This means that you must list all data from the first range before you can insert data into the second range.
    Structure soi_range_list

      Field    Type  Description
      name     C     Name of the range
      rows     C     Number of rows
      columns  C     Number of columns
      code     C     Function in the range:
                     SPREADSHEET->SPREADSHEET_CLEAR: Deletes the range
                     SPREADSHEET->SPREADSHEET_COLUMNSHIDE: Hides columns
                     SPREADSHEET->SPREADSHEET_ROWSHIDE: Hides rows
                     SPREADSHEET->SPREADSHEET_PROTECT: Range is protected
                     SPREADSHEET->SPREADSHEET_UNPROTECT: Range is not protected
                     SPREADSHEET->SPREADSHEET_COLUMNSSHOW: Columns are displayed
                     SPREADSHEET->SPREADSHEET_ROWSSHOW: Rows are displayed
                     SPREADSHEET->SPREADSHEET_INSERTALL: The entire table is inserted,
                       regardless of the size of the area
                     SPREADSHEET->SPREADSHEET_NEWRANGE: Creates a new range
    The name identifies the range in the worksheet. This is, in effect, the key with which you always access the range. The size of the range is always given in columns and rows.
    Some functions allow you to access a specific area in a worksheet. You can see from the table which functions are implemented.
    Description of Data Type soi_generic_table
    In this table, you can save data from the range and use the Data Provider to transfer it to or retrieve it from the frontend. The data is transferred directly as a string with no type information.
    Structure soi_generic_table

      Field   Type    Description
      row     C(4)    Row
      column  C(4)    Column
      value   C(256)  Value
    The sequence of the data must correspond to the sequence of the range description, for example, range1 before range2. The data table must then contain the data for the ranges in the sequence range1, range2.
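    To make the ordering rule concrete, here is a small sketch. The field names come from the structure descriptions above; the line types soi_range_item and soi_generic_item and the range names range1/range2 are assumptions for illustration:

    DATA: LT_RANGES TYPE SOI_RANGE_LIST,
          LS_RANGE  TYPE SOI_RANGE_ITEM,    "assumed line type of soi_range_list
          LT_DATA   TYPE SOI_GENERIC_TABLE,
          LS_DATA   TYPE SOI_GENERIC_ITEM.  "assumed line type of soi_generic_table

    "Range descriptions, in order: range1 first, then range2
    LS_RANGE-NAME = 'range1'. LS_RANGE-ROWS = '2'. LS_RANGE-COLUMNS = '1'.
    APPEND LS_RANGE TO LT_RANGES.
    LS_RANGE-NAME = 'range2'. LS_RANGE-ROWS = '1'. LS_RANGE-COLUMNS = '1'.
    APPEND LS_RANGE TO LT_RANGES.

    "The data table keeps the same order: all of range1 before range2
    LS_DATA-ROW = '1'. LS_DATA-COLUMN = '1'. LS_DATA-VALUE = 'A'. APPEND LS_DATA TO LT_DATA.
    LS_DATA-ROW = '2'. LS_DATA-COLUMN = '1'. LS_DATA-VALUE = 'B'. APPEND LS_DATA TO LT_DATA.
    LS_DATA-ROW = '1'. LS_DATA-COLUMN = '1'. LS_DATA-VALUE = 'C'. APPEND LS_DATA TO LT_DATA.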
    Description of Data Type soi_format_table
    Use this table to specify the format of a range. The format consists of various attributes, all of which can be set in a single line. Each variable attribute corresponds to a column of the structure.
    To create a work area for this table, use the structure soi_format_item as a reference.
    The entry "-1" always indicates that the existing attribute value for the range should not be changed.
    Structure soi_format_table

      Field       Type    Description
      name        C(256)  Name of the range
      front       I       Font color (see color palette)
      back        I       Background color (see color palette)
      font        C(256)  Name of the font family; permitted values:
                          'Arial', 'Courier New', 'Times New Roman'
      size        I       Font size ('-1': unchanged)
      bold        I       '1': bold / '0': normal / '-1': unchanged
      italic      I       '1': italic / '0': normal / '-1': unchanged
      align       I       Alignment: '-1': unchanged / '0': right-justified /
                          '1': centered / '2': left-justified
      frametype   I       Control byte for setting the frame ('-1': unchanged)
      framecolor  I       Frame color (see color palette; '-1': unchanged)
      currency    C(3)    ISO standard currency code
      number      I       Format of a cell in the range: 1: simple number /
                          2: scientific / 3: percentage
    The control byte frametype contains the following bits. If a bit is set, its corresponding line is drawn. You can set the thickness of the line to one of four levels using bits 6 and 7.

      Bit  Description
      0    Sets the left margin
      1    Sets the top margin
      2    Sets the bottom margin
      3    Sets the right margin
      4    Horizontal line
      5    Vertical line
      6    Thickness
      7    Thickness
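    As a worked example of the bit values (a sketch only; soi_format_item as the work-area type is mentioned above, while the range name and the chosen frame are illustrative):

    DATA: LS_FORMAT TYPE SOI_FORMAT_ITEM.

    LS_FORMAT-NAME      = 'range1'.   "range to format (hypothetical name)
    LS_FORMAT-FRONT     = -1.         "font color unchanged
    LS_FORMAT-BACK      = -1.         "background unchanged
    LS_FORMAT-SIZE      = -1.         "font size unchanged
    "left margin (bit 0 = 1) + bottom margin (bit 2 = 4)
    "+ right margin (bit 3 = 8) + thickness (bit 6 = 64) = 77
    LS_FORMAT-FRAMETYPE = 77.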
    Description of Data Type soi_full_range_table
    Each line of a table with the type soi_full_range_table specifies the full definition of a range. The individual lines have the data type soi_full_range_item.

    Structure soi_full_range_table

      Field    Type    Description
      name     C(128)  Name of the range
      top      I       Top row of the range
      left     I       Leftmost column of the range
      rows     I       Number of rows in the range
      columns  I       Number of columns in the range
      sheets   C(128)  Worksheet on which the range is defined
    Description of Data Type soi_cell_table
    Each line of a table with the type soi_cell_table specifies the attributes of a range of cells. However, no range name is used. Instead, the cell area is defined by its starting position and the number of rows and columns it contains. The individual lines have the data type soi_cell_item.

    Structure soi_cell_table

      Field       Type    Description
      top         I       Top row of the range
      left        I       Leftmost column of the range
      rows        I       Number of rows in the range
      columns     I       Number of columns in the range
      front       I       Font color (see color palette)
      back        I       Background color (see color palette)
      font        C(256)  Font; permitted values: Arial, Courier New, Times Roman
      size        I       Font size (-1: unchanged)
      bold        I       '1': bold / '0': normal / '-1': unchanged
      italic      I       '1': italic / '0': normal / '-1': unchanged
      align       I       Alignment: '-1': unchanged / '0': right-justified /
                          '1': centered / '2': left-justified
      frametype   I       Control byte for setting the frame ('-1': unchanged)
      framecolor  I       Frame color (see color palette; '-1': unchanged)
      currency    C(3)    ISO standard currency code
      number      I       Format of a cell in the range: 1: simple number /
                          2: scientific / 3: percentage
      decimals    I       Number of decimal places
      input       I       '0': input off / '1': input on
    Description of Data Type soi_dimension_table
    You can use an internal table with this type to identify a range by specifying the coordinates of its top left-hand corner, its length, and its width. The lines of soi_dimension_table have the line type soi_dimension_item .
    Structure soi_dimension_item

      Field    Type  Description
      top      I     Topmost row of the range
      left     I     Leftmost column of the range
      rows     I     Number of rows
      columns  I     Number of columns
    Board: A tabbed area in the workspace used to manipulate the model and its elements: Design board, Layout board and Source board.
    Characteristic: A type of InfoObject in SAP BI systems that provides a classification such as company code, product, customer group, fiscal year, period, or region. Related to the OLAP-standard term dimension.
    Component: A reusable model element, such as a UI component or a data service.
    Cube: A set of data organized as a multidimensional structure defined according to dimensions and measures. Related SAP BI terms include InfoCube and query.
    Data binding: A connection between two UI components (or between a web service and a UI component) that channels identical data from the output port of one UI component to the input port of the other UI component.
    Data flow: The means by which data is channeled between a data service and connected UI components, or between two UI components whose connection was changed from data binding to data flow.
    Data mapping: A connection between two model elements, describing, for example, the data that is input to an element or the fields that are output from another element.
    Data service: Any function call, business object or query imported into the model. At runtime, the data service is called and returns results.
    Data store: A central data container where data of a model can be temporarily stored for future use.
    Dimension: In OLAP-standard systems: a collection of similar data which, together with other such collections, forms the structure of a cube. Typical dimensions include time, product, and geography. Each dimension may be organized into a basic parent-child hierarchy or, if supported by the data source, a hierarchy of levels. For example, a geography dimension might include levels for continent, country, state, and city. The related term in SAP BI systems is characteristic. In SAP BI systems: a grouping of those evaluation groups (characteristics) that belong together under a common superordinate term. With the definition of an InfoCube, characteristics are grouped together into dimensions in order to store them in a star schema table (dimension table).
    Element: A general term indicating any item used to create a model, including components, connectors and operators.
    Enterprise service: A Web service defined to perform functions of an SAP system. Web services are published to and stored within a repository.
    Field: An element of a table that contains a single piece of data. Fields are organized into rows, which contain all the data relevant for one specific entry in the table. In some databases, field is a synonym for column.
    Filter: A set of criteria that restricts the set of records returned as the result of a query. With filters, you define which subset of data appears in the result set.
    Hierarchy: A logical tree structure that organizes the members of a dimension into a parent-child relationship. If supported by the data source, the hierarchy consists of levels, where the top level is an aggregate of all members and each subsequent level has zero or more child members.
    InfoArea: An element for grouping meta-objects in the Business Information Warehouse. Each InfoProvider is assigned to an InfoArea. The resulting hierarchy is displayed in the Administrator Workbench.
    InfoCube: A set of relational tables in SAP BI systems created according to the star schema: a large fact table in the center, with several dimension tables surrounding it. It provides a self-contained dataset which can be used for analysis and reporting. Similar to the OLAP-standard term cube.
    InfoObject: A business evaluation object (for example, customer or quantity) in SAP BI systems. Types of InfoObjects include characteristics, key figures, units, time characteristics, and technical characteristics (such as request numbers).
    JDBC: Java Database Connectivity, which provides an API that lets you access relational databases using the Java programming language. This enables connectivity to a wide range of SQL databases, and also provides access to tabular data sources such as spreadsheets or flat files. The BI JDBC Connector accesses data from JDBC-compliant systems.
    Join: A relationship between two tables that produces a result set combining their contents. You create a join by indicating how selected fields in one table are related to selected fields in the other table.
    Key figure: A value or quantity in SAP BI systems. Related to the OLAP-standard term measure. You may also define calculated key figures, which are derived using a formula.
    Layer: A collection of UI elements that are all visible at the same time at runtime.
    Level: A set of nodes (members) in a tree hierarchy in supporting data sources that are at the same distance from the root of the tree. For example, in a geography hierarchy, the top level might be all places, the second level might be continents, the third level might be countries, and the fourth level might be cities.
    MDX: Multidimensional Expressions, a query language used to retrieve and manipulate multidimensional data.
    Measure: One category of values, usually numeric, used to define a cube. These values are derived from one or more columns in the cube's fact table and are the basis for aggregation and analysis. Related SAP BI terms include key figure and structure element.
    Member: An element of a dimension that represents one or more occurrences of data. A member can be unique (it occurs only once) or non-unique (it may occur more than once in its dimension). For example, in a geography dimension that includes cities in the US, the member Portland could be non-unique, since there is a city called Portland in the state of Oregon and in the state of Maine. In SAP BI systems, members are referred to as instances of characteristics.
    Model: An object created in Storyboard. Models may contain packages, pages, iViews and any other model elements.
    Multidimensional data: Data in dimensional models suitable for business analytics. In this documentation, the term multidimensional data is used synonymously with OLAP data.
    Navigation line: A connection that provides event annotation, running between model layers. The source element raises the event that can be handled by the connected element. By default, a navigation line is curved.
    ODBO: OLE DB for OLAP, Microsoft's set of objects and interfaces that extend the ability of OLE DB to provide access to multidimensional data sources on the Windows platform. Providers of OLAP data can implement the interfaces described with OLE DB for OLAP to allow all OLAP clients to access their data. The BI ODBO Connector accesses data from ODBO-compliant systems.
    OLAP: Online analytical processing, a system of organizing data in a multidimensional model that is suitable for decision support. SAP BI systems are OLAP systems.
    Operation: A functionality provided by a Web service.
    Operator: A mechanism used to manipulate data returned from the data service before it is displayed in the iView.
    Package: A high-level "container"; it can contain any number of pages, iViews or other packages.
    Port: A defined point of interface into and out of a component.
    Query: In SAP BI systems, a collection of selected characteristics and key figures (InfoObjects) used together to analyze the data of an InfoProvider. A query always refers to exactly one InfoProvider, whereas you can define as many queries as you like for each InfoProvider.
    Query view: In SAP BI systems, a view of a query after navigation, saved in an InfoCube. You can use this saved query view as a basis for data analysis and reporting.
    Relational database: A repository for typically large amounts of information, structured in accordance with the relational model, in tables with columns. A relational database is created and administered by a relational database management system (RDBMS).
    Row: A set of fields within a table that contains the data for one specific entry in the table. Each row in a given table has the same structure, predefined for a particular table. In some databases, row is a synonym for record.
    SAP Query: A component that allows you to create custom reports without any ABAP programming knowledge. The BI SAP Query Connector uses SAP Query to access data from SAP operational applications.
    Storyboard: The Visual Composer client from which you design models.
    Table: A set of rows, also known as a relation. The table is the central object of the relational model.
    Task panel: A work area of the Visual Composer Storyboard desktop that displays a specific set of tools for building a model.
    Toolbar: The horizontal row of buttons under the main menu (main toolbar) or the vertical row of buttons in the task panel (task-panel toolbar).
    Toolbox: A set of board-specific tools that assist in performing tasks in the Visual Composer workspace.
    Value help: The offering, typically in a pop-up dialog box, of possible valid values for an input field. Also known as input help, selection help, or F4 help.
    Web service: An interface between two or more software applications that is implemented with the industry standards SOAP, WSDL and UDDI.
    Workspace: The main grid area of Visual Composer that displays the model as it is built and modified. The workspace consists of boards.
    XMLA: XML for Analysis, an XML-messaging-based protocol specified by Microsoft for exchanging analytical data between client applications and servers (for example, OLAP providers) using HTTP and SOAP as a service on the Web. The BI XMLA Connector accesses data from XMLA-compliant systems.
    Clustering allows you to save sorted data records in the fact table of an InfoCube. Data records with the same dimension keys are saved in the same extents (related database storage units). This means that such data records are not spread across a large memory area, which reduces the number of extents that the system has to read when it accesses tables. This greatly accelerates read, write and delete access to the fact table.
    Prerequisites
    Currently the function is only supported by the database platform DB2 for Linux, UNIX, and Windows. You can use partitioning to improve the performance of other databases. For more information, see Partitioning.
    Features
    Two types of clustering are available: Index clustering and multidimensional clustering (MDC).
    Index Clustering
    Index clustering organizes the data records of a fact table according to the sort sequence of an index. Organization is linear and corresponds to the values of the index field.
    If a data record cannot be inserted in accordance with the sort sequence because the relevant extent is already full, the data record is inserted into an empty extent at the end of the table. For this reason, the system cannot guarantee that the sort sequence is always correct, particularly if you perform many insert and delete operations. Reorganizing the table restores the sort sequence and frees up memory space that is no longer required.
    The clustering index of an F fact table is, by default, the secondary index in the time dimension. The clustering index of an E fact table is, by default, the acting primary index (P index).
    As of release SAP BW 2.0, index clustering is standard for all InfoCubes and aggregates.
    Multidimensional Clustering (MDC)
    Multidimensional clustering organizes the data records of a fact table in accordance with one or more fields that you define freely. The selected fields are also marked as MDC dimensions. Only data records that have the same values in the MDC dimensions are saved in an extent. In the context of MDC, an extent is called a block. The system can always guarantee that the sort sequence is correct. Reorganizing the table is not necessary, even with many insert and delete operations.
    Block indexes from within the database, instead of the default secondary indexes, are created for the selected fields. Block indexes link to extents instead of data record numbers and are therefore much smaller. They save memory space and the system can search through them more quickly. This accelerates table requests that are restricted to these fields.
    You can select the key fields of the time dimension or any customer-defined dimensions of an InfoCube as an MDC dimension. You cannot select the key field of the package dimension; it is automatically added to the MDC dimensions in the F fact table.
    You can also select a time characteristic instead of the time dimension. In this case, the fact table has an extra field. This contains the SID values of the time characteristic. Currently only the time characteristics Calendar Month (0CALMONTH) and Fiscal Year/Period (0FISCPER) are supported. The time characteristic must be contained in the InfoCube. If you select the Fiscal Year/Period (0FISCPER) characteristic, a constant must be set for the Fiscal Year Variant (0FISCVARNT) characteristic.
    Clustering is applied to all the aggregates of the InfoCube. If an aggregate does not contain an MDC dimension of the InfoCube, or if all the InfoObjects of an MDC dimension are created as line item dimensions in the aggregate, the aggregates are clustered using the remaining MDC dimensions. Index clustering is used for the aggregate if the aggregate does not contain any MDC dimensions of the InfoCube, or if it only contains MDC dimensions.
    Multidimensional clustering was introduced in Release SAP NetWeaver 2004s and can be set up separately for each InfoCube.
    For procedures, see Definition of Clustering.
    Screen capture input to SAP Business Graphics must adhere to certain format rules in order to be recognized correctly.
    SAP Business Graphics assumes that your screen data resembles the basic SAP table structure. This structure is somewhat flexible, but the table must obey the format rules listed in this section.
    Restrictions on the Format of the Data
    If you use the screen capture facility to input graphics data, the input table can contain either a single list of values, or rows and columns. If the data is a single list, you can include the values themselves and labels for each value. If the data has rows and columns, you can include a label for each row, a label for each column, and the table values themselves.
    You cannot use the screen capture facility to input data in multiple tables. If you want to graph data occurring in multiple tables, you must write the input values to a file using ABAP programming tools. See SAP Graphics: Programming Interfaces for more information.
    Format Rules for Numerical Values
    Numerical values must obey the following rules:
    Within a numerical value, the screen capture recognizes only the minus sign (hyphen), the comma, and the decimal point (period) as legitimate punctuation. Exponential notation and other variations are not recognized.
    Note that the functions of the period and the comma in the English system are exactly opposite to their functions in some European systems. If your numbers are not being interpreted correctly, check with the system administrator to determine how these characters should be used.
    The minus sign must occur after the number, with no intervening spaces.
    All numbers in a row must be separated by spaces.
    A column of numbers is right-justified and identified by the position of its right-most character. Each number belonging to this column must have its right-most character in the correct position.
    If you have values partially or entirely out of alignment with the given right-most character position, they will not be interpreted as belonging to the proper column. In most cases, the screen capture program assumes these are values for an entirely new column.
    You may leave out values for a given row or column.
    Format Rules for Text Strings
    You can include labels in the table to name the rows and columns. You can also provide a title for the set of rows, for the set of columns, and for the graph as a whole.
    SAP Business Graphics does not accept more than 32 elements per dimension. As a result, you cannot have more than 32 rows or 32 columns in your table.
    Any string of characters not identifiable as a number is assumed to be a label. Labels may occur at the beginning of a row, at the head of a column, as a title for the rows or columns, or as the graph's main title. A non-numeric item placed among the data values is ignored by the graphics program.
    A legitimate number occurring where a label should be is interpreted as a number. If you want to use labels that look like numbers, you must modify them to contain at least one non-numeric character.
    Placement of labels for row-names or column-names:
    Row-names can occur only at the beginning (left side) of a row.
    Column-names should line up above the columns they are heading, but do not necessarily need to begin in the same column. They should be separated by at least two spaces.
    If you don't adhere to these requirements, the screen capture program attempts to pick out the labels anyway. However, the results may not be what you expect. (Check the selection bars in the Selection view to see if your headers were correctly identified.)
    Placement of titles for rows or columns as a set:
    The title for the rows as a set should be placed directly above the column of row-names.
    The title for the columns as a set should occur directly above the first of the column-names, and begin in exactly the same position.
    The main title for the graph should occur in the very first line of the highlighted area. If you have more text there than just the title, the screen capture program attempts to pick out the string in the center of the line. The longest string in the center of the line separated from other text by double spaces is assumed to be the title.
    The maximum length for a text string cannot be specified exactly since this depends on the size of your window, the resolution of your monitor, and other factors.
    Many strings too long for a small window are displayed correctly when you enlarge the window to full-screen size. In general, you must experiment to find the optimal length for text strings.

  • Need help choosing optimal hardware for a laptop that will run AE CS5

    BACKGROUND INFO (Questions listed below)
    I need some help figuring out the best hardware configuration for a new laptop. I’m a student at a design school and I will be using the laptop mainly to run After Effects and, to a lesser extent, Photoshop and Illustrator. I don’t need to worry about 3D rendering software or Premiere Pro. I am primarily interested in getting the best performance during editing. I am not looking for the best performance for final output. I would prefer to sacrifice final output times for better editing/interface performance.
    I will be working with standard definition content and perhaps HD up to 720p on occasion. I do not need to operate in resolutions higher than 720p. My projects are generally animation and use many sources and many layers.
    I’ve been reading up on optimal hardware configurations for CS5 but my understanding is still a little foggy and I would like to use this thread to figure out how to build the best machine for my budget. The budget is about $1400 to $1600 CANADIAN after tax.
    I DON’T need help finding the laptop. I will search for it on my own. I just need to understand the best hardware to purchase within my budget.
    I realize that it is probably impossible to buy the laptop I want with the hardware configuration I need “off the shelf”. Instead I will be looking for a good base model (~$900-$1100) and I will purchase the necessary hardware upgrades separately. I will not be purchasing a Mac.
    Right now I’m thinking of a machine built something like this:
    - 15”-16” screen (17” models are too big/heavy)
    - Mid to high end i5 processor OR entry level i7 quad core
    - 8GB RAM (I would go to 12GB, but it’s hard to find a 15”-16” laptop with 3 memory slots)
    - SSD to replace HDD (However, if possible, I would like a laptop with dual HDD support, or I would swap the optical drive for another HDD. If I had access to 2 drives, I would have an HDD/SSD combo.)
    - Medium/high end NVIDIA GPU to take advantage of OpenGL while editing.
    QUESTIONS
    1) Does the “Render Multiple Frames Simultaneously” option enhance general editing performance (applying filters, scrubbing through the timeline, reverting history states)? Or does it ONLY help speed up RAM previews and final output? Does it reduce the length of RAM previews?
    1b) Is this option even necessary to enable on 64bit systems? (As far as I understand it was used to solve a problem where 32bit systems/software would only recognize 4GB of RAM per instance of AE).
    1c) If I turn this option on to help with RAM previews, would I be hindering general editing performance in any way? Or does this option have basically zero drawbacks?
    2) How come “Actual CPUs that will be used” will read 0 even if the sum total of RAM assigned to the CPUs plus the RAM reserved for other applications is less than the total available system RAM (on a 64bit system)? For example, I currently have 4GB RAM and 2 installed CPUs. I have 1.5GB reserved for other programs and when I set 0.75GB per CPU both CPUs are used. However when I set 1GB per CPU then 0 CPUs are used, even though the total RAM adds up to only 3.5GB.
    CPU
    3.) Considering the fact that I am more concerned with smooth performance while editing rather than final output speeds, would it be better to get a dual core i5 clocked around 2.5 or a quad core i7 clocked around 1.8?
    4.) What is the difference between an i3 and an i5 processor even if they are clocked at the same speed? How does an i5 460M compare on the grand scheme of things?
    RAM
    6.) Should the quantity of RAM that I get (8GB vs 12GB) be based on the number of cores in my CPU? If so, how should I be calculating optimal RAM based on # of cores? Should I also be counting threads, or just actual physical cores?
    STORAGE/SWAP
    7.) Should I replace the HDD with an SSD? I'm looking for snappy interface performance while editing. I would think that if the RAM fills up it would be best to have the SSD for scratch/cache.
    8.) What performance benchmarks are most important when considering an SSD for After Effects? (4k writes? IOs per second? Max read/write?)
    9.) I can afford the OCZ Vertex2 120GB SSD. Would this be a good choice if an SSD is recommended?
    10.) Would it be better to have 8GB of RAM and an SSD, or 12GB of RAM and an HDD? Explain why.
    GPU
    10.) After Effects utilizes OpenGL to enhance editing performance. I will not really be using Premiere Pro, so catering to the CUDA Mercury Engine is not a concern. Do high-end gaming cards provide significant gains in OpenGL performance? Or do OpenGL performance gains taper off around the mid-range GPUs? (i.e., can you justify buying a high-end GTX 260M graphics card for enhanced editing performance versus an "entry level" dedicated card like the 310M?)
    11.) What hardware specs are most important when considering a GPU for editing performance in AE? (Memory size? # of Pixel shaders? Core speed? Shader speed?)
    Thanks so much for any answers you can offer to these questions.

    Please make sure that you've read through this page and what it points to.
    > 1) Does the “Render Multiple Frames Simultaneously” option
    enhance general editing performance (applying filters, scrubbing through
    the timeline, reverting history states)? Or does it ONLY help speed up
    RAM previews and final output? Does it reduce the length of RAM
    previews?
    It only increases rendering speed for RAM previews and rendering for final output. In After Effects CS5, it doesn't decrease the length of RAM previews. (In CS4, it does.)
    1b) Is this option even necessary to enable on 64bit
    systems? (As far as I understand it was used to solve a problem where
    32bit systems/software would only recognize 4GB of RAM per instance of
    AE).
    You misunderstood. Yes, it's still relevant on 64-bit computers. More so, in a way. (I'd rather not spend my entire Sunday writing out detailed answers to satisfy idle curiosity, so I'm not going to give all the technical detail in that answer.)
    > 1c) If I turn this option on to help with RAM previews,
    would I be hindering general editing performance in any way? Or does
    this option have basically zero drawbacks?
    It takes a small but nonzero time for the background processes to start up when they need to be used and shut down when they're done. And as they sit waiting, they take up a little bit of memory. So, it's not exactly correct to say that there are no downsides to leaving it on. But it's close. I leave it on.
    > 2) How come “Actual CPUs that will be used” will read 0 even
    if the sum total of RAM assigned to the CPUs plus the RAM reserved for
    other applications is less than the total available system RAM (on a
    64bit system)? For example, I currently have 4GB RAM and 2 installed
    CPUs. I have 1.5GB reserved for other programs and when I set 0.75GB per
    CPU both CPUs are used. However when I set 1GB per CPU then 0 CPUs are
    used, even though the total RAM adds up to only 3.5GB.
    4GB - 1.5GB for other software leaves 2.5 GB for After Effects.
    If you have 1GB assigned per background CPU, then the foreground takes 1.2x that = 1.2GB. That leaves 1.3GB for background processes, which is enough for one background process. There's no point in starting only one background process to do rendering, so it doesn't bother. (Note: When background processes are rendering, the foreground process isn't rendering.)
    > 3.) Considering the fact that I am more concerned with
    smooth performance while editing rather than final output speeds, would
    it be better to get a dual core i5 clocked around 2.5 or a quad core i7
    clocked around 1.8?
    Get the quad-core. That gives you a greater total number of cycles. And After Effects works very well with multiple processors, even beyond Render Multiple Frames Simultaneously multiprocessing. An entirely unrelated sort of multiprocessing (multithreading) spreads work out to multiple processors.
    > 6.) Should the quantity of RAM that I get (8GB vs 12GB) be based on the number of cores in my CPU? If
    so, how should I be calculating optimal RAM based on # of cores. Should
    I also be counting threads, or just actual physical cores?
    The optimum amount is the amount that you can cram into the computer. I'm not kidding. Spend your budget on RAM until you have 4GB installed per processor (and I'm counting the virtual processors due to hyperthreading). If you have a quad-core, that's 8 CPUs with hyperthreading, so the optimum amount of RAM installed is 32GB. You can work with less, but you did ask about optimum. For HD work (i.e., 1920 pixels across), you're OK with more like 3GB installed per CPU. That's what I have at home: 24GB in a quad-core. You're going to assign 2/3 or so of the RAM to After Effects, so 3GB installed per CPU is 2GB per core for HD work in After Effects.
    > 10.) Would it be better to have 8GB of RAM and an SSD, or 12GB of RAM and an HDD? Explain why.
    12GB of RAM. Because After Effects likes RAM. (If you're thinking of deliberately using virtual memory to swap memory to the hard disk, don't. That's a performance killer.)
    > 10.) After Effects utilizes OpenGL to enhance editing performance.
    Not really. If you're on a limited budget, don't even think about the GPU until you've already got the most RAM, the fastest CPUs, the largest number of CPUs, two fast hard disks, and a couple of good monitors. Then, and only then, should you even consider getting something beyond the stock graphics card. Yes, OpenGL can be used to accelerate some things, but that's only for the low-fidelity preview renderer. (Pardon the bluntness, but I want to make sure that you heed this.)

  • Write optimized DSO

    Hello Guys,
    I have couple of questions regarding write optimized DSO.
    1. Can we use a write-optimized DSO to load to a cube?
    2. Also, in the end routine, without adding any code, when I check the syntax I get a syntax error. The code generated by SAP has some syntax errors. When I try to change the DSO to standard and generate it again, I don't get the syntax error in the end routine.
    Has anyone encountered this problem?
    Please confirm. As always thanks for your help.
    Senthil

    In the end routine I am getting the following error before writing any ABAP code:
    E: Explicit length specifications are necessary with types C, P, X, N and W in the OO context.
    When you click the error message, it takes you to the following data declaration in the code:
      TYPES:
          BEGIN OF  ,
    *     InfoObject: 0RECORDMODE BW Delta Process: Update Mode.
            RECORDMODE           TYPE RODMUPDMOD,
          END   OF  .
    The above data declaration is generated by SAP; note that the structure name after BEGIN OF / END OF is missing.
    Any input really appreciated.
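    For comparison, a declaration of the same shape that passes the syntax check could look as follows (a sketch only: the structure name TY_S_RECORD is made up; in the generated code the name after BEGIN OF / END OF is missing, which is what breaks the syntax):

    TYPES:
      BEGIN OF TY_S_RECORD,   "name supplied here, unlike the generated code
        "InfoObject: 0RECORDMODE BW Delta Process: Update Mode
        RECORDMODE TYPE RODMUPDMOD,
      END OF TY_S_RECORD.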

  • Request Deletion from Write-optimized DSO

    Hello,
    With the new write-optimized technology, it is possible to manually delete "older" requests from the W-O DSO.
    Could anyone think of an automated process to delete "old" requests from a W-O DSO (not the entire content; the most recent requests should still be available)?
    For instance: delete every day the requests older than 7 days.
    Already checked solutions:
    - Selective deletion at the administration level of the DSO -> cannot be repeatedly scheduled
    - Copying the generated Selective Deletion program to make one's own program and scheduling it (the system cannot "remember" the generated program)
    - Diverse SAP function modules -> do not work for this scenario (like RSSM_DELETE_REQUEST, which is only for cubes, or RSSM_PROCESS_REQUDEL_ODSO, where you need to specify the request number)
    - We do not want to include a delete job in a routine at the transformation level.
    - We do not want to complicate the data model by creating a new intermediate DSO that allows flushing the DSO at each load.
    Any other ideas??
    We are on version 7.0, SP 13.
    Many thanks!
    amanda

    Hi,
    We have worked on a similar business requirement. We wrote a report program in SE38 and run it via a process chain.
    If you want, I can help you write the code for it.
    It is a two-step process:
    1. Delete the request from RSICCONT so that it disappears from the Manage tab.
    2. Delete the data from the active table of that W-O DSO.
    Code for Program :
    data  :v_time  type c length 17,
              v_date like sy-datum .
    ( N) = 7  put no of days here before which you want to delete records
    v_date = sy-datum.
    subtract 7 from v_date.
    concatenate  v_date   sy-uzeit  into v_time.
    delete from  rsiccont where icube = wdso_name and timestamp LT v_time .
    delete from  wdso_active_table_name where rstt_tsmp LT v_time.
    It will do your work .
    Regards ,
    Jaya

  • Tips and Tricks on optimizing your app for SDK 2.6

    If there is a thread like this I apologize for starting another one.
    The result from SDK 2.6 was a shellshock for me, but then again, it brings back the old days when I first struggled with the first version of the Flash Packager.
    I hope that we can put all our ways of optimizing apps for SDK 2.6 here.
    I do not claim superior knowledge, just what I have experienced from my own usage.
    Here’s mine [for now]:
    1: Do not use cacheAsBitmap and cacheAsBitmapMatrix when publishing using CPU rendering.
    If your app uses iPhone/iPad combinations, this option will F you up.
    The size of your buttons will be wrong, and you'll get graphics cached at the wrong resolution. (Which shouldn't happen, because caching IS supposed to be a good thing, but SDK 2.6 doesn't always work here for now. I'm not saying it never works; I'm saying that in certain unpredictable refresh scenarios you might see [I know I have] your buttons or graphics change size on redraw.)
    2: Do not use cacheAsBitmap and cacheAsBitmapMatrix when publishing using GPU rendering.
    Yes, you heard that right, because the app will be slow, slow, slow.
    Basically SDK 2.6 renders these two options [cacheAsBitmap and cacheAsBitmapMatrix] completely useless and in many cases detrimental.
    The very thing that sped up apps in older versions of the Flash packager is now the slo-mo dragger in the new SDK.
    What then do we use, you ask?
    I ask that question myself.
    But now we have new APIs!
    3: Approaching this new version of the SDK.
    I personally would suggest using this 2.6 SDK as a stepping stone to learning the new APIs, but do not use it for actual app publishing unless you really need the multitasking.
    The speed problem will make your app feel unprofessional to users accustomed to the responsiveness of typical iPhone/iPad apps, a responsiveness that even older versions of the Flash packager can achieve if cacheAsBitmap and cacheAsBitmapMatrix are used properly.
    Any more tips to share ?

    TIP: Just have a single ENTER_FRAME listener and store the other callbacks in a Vector. Example:
    public var enterFrameCallbacks : Vector.<Function>;

    // Create the vector and register the single ENTER_FRAME listener.
    public function initEnterFrameCallbacks():void {
         enterFrameCallbacks = new Vector.<Function>();
         stage.addEventListener( Event.ENTER_FRAME, fireEnterFrameCallbacks );
    }

    // Register a callback to be invoked every frame.
    public function addEnterFrameCallback( callback : Function ):void {
         enterFrameCallbacks.push( callback );
    }

    // The one real listener: fan the frame event out to every registered callback.
    private function fireEnterFrameCallbacks( e : Event ):void {
         e.stopImmediatePropagation();  // keep other stage listeners from also handling this frame
         var i : uint;
         for (i = 0; i < enterFrameCallbacks.length; i++) {
              if (enterFrameCallbacks[i] != null) enterFrameCallbacks[i]();
         }
    }
    this is just a simple (untested) example.  Tips to make it even better:
    1) Store callbacks as linked list (faster traversal) instead of vector
    2) Add a 'remove' function
    3) Check for duplicates (if this is going to be a problem)
    4) Pass parameters to the callback (such as dt - or time since last frame)
    5) Implement this handler as a singleton, or a series of static methods, to allow easy access from anywhere
    Note that you may run in to problems with this code if:
    1) One of the callbacks removes itself or another callback from the list (because the vector changes as it is being traversed in the loop)

  • What file size is used for "optimized" upload/download?

    When I upload my photo files from Aperture (and previously from iPhoto) I routinely use the upload setting named "downloading of optimized" files.  I know that this downsizes the files for faster upload/download, but still gives a file size acceptable for great onscreen presentation and for a resulting small print if someone downloads.  Can anyone tell me what downsizing Aperture/iPhoto does in this case?  For example, 50% JPEG, email-sized JPEG, etc.  Since Apple will kill the MobileMe galleries in June I have to switch to another photo presentation host.  For upload to such a host Aperture/iPhoto gives choices like 100% JPEG, 50% JPEG, etc.  What I want to do is end up with something similar to what I now get with the "optimized" setting.  Thank you.

    You will note that I referenced both programs (iPhoto & Aperture) in my original message above, and I have posted in both forums.  Since both programs offer the same (at least on the surface) option of "optimized" uploads to MobileMe galleries, one might assume that both use the same or a similar system.  Lengthy testing of my own is the "best" solution only if I cannot find anyone who is knowledgeable about what these programs do in the upload.  It has been my sincere joy over many many years that I often find people on these forums who possess knowledge much greater than mine.  As for the programs changing the algorithm at any time, I doubt that, since this MobileMe function will disappear in June.  Seems like a strange time to make changes.

  • What are the limitations regarding use of video in an iBook: length, file size, etc?

    What are the limitations regarding use of video in an iBook: length, file size, etc?

    Total book size is 2.0GB max. Length would be relative to size based on quality.
    See:
    - Optimizing performance in your iBooks Author books
    - iBooks Author: Add video to your iBook

  • Is there a script available for arranging elements for optimal use of the printable area?

    I'm starting a sticker printing business using Illustrator as my main layout and illustration tool. I was wondering if there's a script available for automatically arranging a set of elements on a page so that they optimally take up the available space. I figured this would save me some on material costs.
    If I were to create a script from scratch, can someone give me pointers? I'm a casual AI user but I have Javascript experience.
    Thanks.

    The image below shows the 4 different die sizes; the artwork fits inside the dotted box, so make sure there is room around the die and that the drill holes are assigned as well.
    The name of each die is listed above it, and the size is listed below it.
    The script is set to take each die and place it on the template until the template is full; then it saves the template and moves it to the folder where it will be grabbed and prepped for printing.
    Everything below is the script.
    #target illustrator   
        var tempDoc = app.activeDocument;
        copy();
        if (tempDoc.selection.length > 0)
            var myFile = File("S:/TEMPLATES/IN USE/DB_TEMPLATE.ai");
            app.open(myFile);
            var thisDoc = app.activeDocument;
                thisDoc.views[0].centerPoint = [324,503];
                thisDoc.views[0].zoom = .65;
            var allGroups;
            var horizontalCords = [0,180,360,540,0,180,360,540,0,180,360,540,0,180,360,540,0,180,360,540,0];
            var verticalCords = [1012.5,1012.5,1012.5,1012.5,810,810,810,810,607.5,607.5,607.5,607.5,405,405,405,405,202.5,202.5,202.5,202.5,202.5];
            var oneGroups = new Array();
            var twoGroups = new Array();
            var oneAGroups = new Array();
            var twoAGroups = new Array();
            var makeNew1 = new Array();
            var makeNew2 = new Array();
            var twoACounter = 0;
            var layerRemainder = [4,4,4,4];
            var totalDies = 0;
            var noGo = 0;
            var caseTest;
        if (thisDoc.pageItems.length == 0)
            drawTemplate();
        var newLayer = thisDoc.layers.add();
        newLayer.name = "Die Layer";
        paste();
    allGroups = thisDoc.groupItems;
        for (i=0;i<allGroups.length;i++){// determine what dies are present
            if (allGroups[i].name == "1"){
                oneGroups.push(allGroups[i]);
            }
            else if (allGroups[i].name == "1A"){
                oneAGroups.push(allGroups[i]);
            }
            else if (allGroups[i].name == "2"){
                twoGroups.push(allGroups[i]);
            }
            else if (allGroups[i].name == "2A"){
                twoAGroups.push(allGroups[i]);
            }
        }// end FOR
    if (oneAGroups.length == 1)
        oneAGroups[0].position = [540,101.5];   
    if (twoAGroups.length == 1)
        twoAGroups[0].position = [360.25,101.25];
    if (oneAGroups.length == 1 && twoAGroups.length == 1)
        oneAGroups[0].position = [360,203.5];   
        if(oneAGroups.length == 2){
        var add1Group = thisDoc.groupItems.add();
            add1Group.name = "1";
            oneAGroups[0].name = "1A changed";
            oneAGroups[1].name = "1A changed";
            oneAGroups[0].position = [0,0];
            oneAGroups[1].position = [0,-100.75];
            oneAGroups[0].moveToBeginning(add1Group);
            oneAGroups[1].moveToBeginning(add1Group);       
            oneGroups.push(add1Group);
            oneAGroups.length = 0;
            redraw();
        }// end oneAGroups IF
        if(twoAGroups.length == 2){
        var add2Group = thisDoc.groupItems.add();
            add2Group.name = "2";
            twoAGroups[0].name = "2A changed";
            twoAGroups[1].name = "2A changed";
            twoAGroups[0].position = [0,0];
            twoAGroups[1].position = [0,-100.75];
            twoAGroups[0].moveToBeginning(add2Group);
            twoAGroups[1].moveToBeginning(add2Group);
            twoGroups.push(add2Group);
            twoAGroups.length = 0;
            redraw();
        }// end twoAGroups IF
        if (twoGroups.length > 0){
                var h = 0;
                var v = 0;
                var dieCount = 0;
                for (i = 0; i < twoGroups.length; i++) {
                    twoGroups[i].position = [horizontalCords[h],verticalCords[v]];
                    h = h + 2;
                    v = v + 2;
                    dieCount++
                    }//end FOR
            }// end twoGroups length IF
            if (oneGroups.length > 0){
            var h = 0+(twoGroups.length*2);
            var v = 0 + (twoGroups.length*2);
             for (i = 0; i < oneGroups.length; i++){
               oneGroups[i].position = [horizontalCords[h],verticalCords[v]];
                  h = h +1;
                    v = v + 1;
                 }//  end FOR
             }//  end onGroups IF
              redraw();//  redraws template so it updates changes on the page
           //thisDoc.close(SaveOptions.SAVECHANGES);
            }//  end noGo
    else
    alert("You have nothing selected...");
        totalDies = (oneGroups.length + (twoGroups.length*2) + (twoAGroups.length/2) + (oneAGroups.length/2));
    if (totalDies < 20)
        caseTest = 0
        else if (totalDies > 20)
        caseTest = 1
        else if (totalDies == 20)
        caseTest = 2;
    switch (caseTest){
            case 0:
                thisDoc.close(SaveOptions.SAVECHANGES);
                break;
            case 1:
                alert ("Die will not fit on the Template at this time.  Try again later.");
                thisDoc.close(SaveOptions.DONOTSAVECHANGES);
                break;
            case 2:
                    var answer = confirm ("The Template is full.  Do you want to clear it?");
                        if (answer == true)
                            printTheTemplate();
                            break;
    }// end switch
    function drawTemplate(){  //draws the template if the page is empty
            var tempDoc = app.activeDocument;
        tempDoc.rulerOrigin = [0,0];
        tempDoc.pageOrigin = [0,0];
        var newLayer = tempDoc.layers.add();
        newLayer.name = "Template Layer";
        var templateGroup = tempDoc.groupItems.add();
        templateGroup.name = "Template Build";
        var vertLoc = .25;
        var horzLoc = -.25;
        var topTemp = new Array();
        var botTemp = new Array();
        var leftTemp = new Array();
        var rightTemp = new Array();
        for (i=0;i<5;i++){
            botTemp[i] = thisDoc.pathItems.rectangle(0,vertLoc,.5,21);
            topTemp[i] = thisDoc.pathItems.rectangle(1033,vertLoc,.5,21);
             botTemp[i].moveToBeginning(templateGroup);
             topTemp[i].moveToBeginning(templateGroup);
            vertLoc = vertLoc + 180;
        }// end FOR
        for (j=0; j<6;j++){
            leftTemp[j] = thisDoc.pathItems.rectangle(horzLoc,-21,21,.5);
            rightTemp[j] = thisDoc.pathItems.rectangle(horzLoc,720,21,.5);
             leftTemp[j].moveToBeginning(templateGroup);
             rightTemp[j].moveToBeginning(templateGroup);
            horzLoc = horzLoc + 202.5;
        }// end FOR
    return
    }//  end function drawTemplate
    function printTheTemplate(){
              // October 16, 2012  Henry J. Klementovich
            //var TemplatePath = File("C:/Users/henryk/Desktop/Illustrator Test Files/HS_TEMPLATE.ai");
            //open(TemplatePath)
            //open(myFile);
            var thisDoc = app.activeDocument;
                thisDoc.rulerOrigin = [0,0];
                thisDoc.pageOrigin = [0,0]; 
            var thePath = ("S:/TEMPLATES/PRINTED");
            //  Will only continue if the artist has selected the art.  This step saved a lengthy FOR loop that would have had to select each page or path item of the
            //  document.  Some of the templates had over 5,000 items, resulting in an unacceptable wait-time for the loop to select them.
            for (x=0;x<thisDoc.groupItems.length;x++)
                thisDoc.groupItems[x].selected = true;
            if (thisDoc.selection.length > 0)
            //  This section pulls the date from the system for two purposes:  the date shown on the template and the date used to save the file to the ArtShare.
            //  They are different b/c system filenames cannot include the  ":" char, which is used on the template for the time object.
            var  theDate = new Date();
                    var day = theDate.getDate();
                    var month = theDate.getMonth() + 1;
                    var year = theDate.getFullYear();
                    var hours = theDate.getHours();
                    var min = theDate.getMinutes();
                        if (min < 10)
                            min = ("0" + min);
                    var morn;
                        if (hours >= 12) {
                                hours = hours - 12;
                                morn = " PM";
                        }
                        else
                            morn = " AM";
            var saveDate = (month + "-" + day + "-" + year + "    " + hours + min + morn );
            var tempDate = thisDoc.textFrames.add();
                    tempDate.name = theDate;
                    tempDate.contents = (month+ "/" + day + "/ " + year + "    " + hours + ":" + min );
                    tempDate.top = 1026;
                    tempDate.left =40;
                    tempDate.filled = true;
            var actGroups = thisDoc.selection;
            var artGroup = thisDoc.groupItems.add();
                  artGroup.name = "Art Group";
            for (i=0;i<actGroups.length;i++)
                  actGroups[i].moveToEnd(artGroup);
            tempDate.moveToEnd(artGroup);
            artGroup.selected = true;
            copy();
            thisDoc.pageItems.removeAll();//  copies everything from the current template into the clipboard, clears the template, saves it and closes it.
            var layLen = app.activeDocument.layers.length;
            //alert(layLen);
            for (i=0;i<layLen;i++)
            app.activeDocument.layers[0].remove();
            thisDoc.close(SaveOptions.SAVECHANGES);
            app.documents.add();//  adds new document for the template to be saved in the Printed folder.
            paste();
            var thisDoc = app.activeDocument;  
            var saveName = new File (thePath + "/" + saveDate);
                  saveOpts = new IllustratorSaveOptions();
                  saveOpts.compatibility = Compatibility.ILLUSTRATOR13;
                  saveOpts.generateThumbnails = true;
                  saveOpts.preserveEditability = true;
                  thisDoc.saveAs( saveName, saveOpts );
            var actGroups = thisDoc.selection;
            copy();
            thisDoc.close(SaveOptions.SAVECHANGES);
            $.sleep(10);
            app.documents.add();
            paste();
            var nexDoc = app.activeDocument; 
            var actGroups = nexDoc.selection;
            var artGroup =nexDoc.groupItems.add();
                  artGroup.name = "Art Group";
            for (i=0;i<actGroups.length;i++)
                    actGroups[i].moveToEnd(artGroup);
                    //artGroup.selected = true;
                    artGroup.resize(70,70);
                    alert("The current template has been saved to the Printed Templates folder.  This page is to be printed and sent to editing.")
    Hope this helps someone get one created. If anyone can make one that doesn't depend on the premade dies, that would help me out quite a bit, because some of the dies we make are an odd size and we have to place them manually.

  • SPOOL with the right record length

    I have implemented a SQL script to download a "package" and "package body" definition into a file, in a windows environment.
    To avoid truncated lines (lines split in two) in the file, I use "SET LINESIZE v_max_length", where v_max_length is the maximum length of a record in the package.
    But if a record is "a:=1;", what I get in the file is "a:=1;" padded with trailing blanks out to the full v_max_length.
    How can I get the right length?
    Example:
    SET LINESIZE 200
    SPOOL my_package.sql
    select RTRIM(TEXT) from ALL_SOURCE where name = 'MY_PACKAGE' and TYPE = 'PACKAGE' order by LINE asc;
    SPOOL OFF;

    Why don't you just set the linesize to 1000 and
    SET TRIMSPOOL ON
    Then you get the right output length for each line.
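    For example, a minimal sketch of the spool script with trimming on (MY_PACKAGE is just the placeholder name from your example; PAGESIZE 0 and FEEDBACK OFF keep headings and row counts out of the file):
    SET LINESIZE 1000
    SET TRIMSPOOL ON
    SET PAGESIZE 0
    SET FEEDBACK OFF
    SPOOL my_package.sql
    select TEXT from ALL_SOURCE where name = 'MY_PACKAGE' and TYPE in ('PACKAGE', 'PACKAGE BODY') order by TYPE, LINE asc;
    SPOOL OFF
    With TRIMSPOOL ON, SQL*Plus removes the trailing blanks itself, so the RTRIM is no longer needed, and the IN list also picks up the package body you mentioned wanting to download.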
    Regards
    Marcus

  • Length of TOKEN_INFO column

    Could someone please confirm the maximum length of the token_info column. It used to be 4k, but I think that's changed.
    I want to write a procedure to calculate fragmentation that isn't quite as lengthy to run as ctx_report.index_stats.
    Thanks.

    The token_info column of the dr$...$i table is a blob, so once again any limitations depend upon how you display it. I provided an example below. Can you provide a little more detail about what you are trying to do and how?
    SCOTT@10gXE> CREATE TABLE test_tab (test_col VARCHAR2 (30))
      2  /
    Table created.
    SCOTT@10gXE> INSERT ALL
      2  INTO test_tab VALUES ('test1')
      3  INTO test_tab VALUES ('test2')
      4  INTO test_tab VALUES ('test3')
      5  SELECT * FROM DUAL
      6  /
    3 rows created.
    SCOTT@10gXE> CREATE INDEX test_idx ON test_tab (test_col)
      2  INDEXTYPE IS CTXSYS.CONTEXT
      3  /
    Index created.
    SCOTT@10gXE> DESC dr$test_idx$i
    Name                                                  Null?    Type
    TOKEN_TEXT                                            NOT NULL VARCHAR2(64)
    TOKEN_TYPE                                            NOT NULL NUMBER(3)
    TOKEN_FIRST                                           NOT NULL NUMBER(10)
    TOKEN_LAST                                            NOT NULL NUMBER(10)
    TOKEN_COUNT                                           NOT NULL NUMBER(10)
    TOKEN_INFO                                                     BLOB
    SCOTT@10gXE> SELECT token_text, token_type FROM dr$test_idx$i
      2  /
    TOKEN_TEXT                                                       TOKEN_TYPE
    TEST1                                                                     0
    TEST2                                                                     0
    TEST3                                                                     0
    SCOTT@10gXE> SELECT CTX_REPORT.TOKEN_INFO ('test_idx', 'test2', 0)
      2  FROM   DUAL
      3  /
    CTX_REPORT.TOKEN_INFO('TEST_IDX','TEST2',0)
    ===========================================================================
                           TOKEN INFO FOR TEST2 (0:TEXT)
    ===========================================================================
    index:      "SCOTT"."TEST_IDX"
    base table: "SCOTT"."TEST_TAB"
    $I table:   "SCOTT"."DR$TEST_IDX$I"
                        ROW 1 ($I ROWID AAANCWAABAAAKu6AAB)
      DOCID COUNT: 1           FIRST: 2           LAST: 2
      DOCID: 2 (AAANCUAABAAAKuiAAB)  BYTE: 1  LENGTH: 3  FREQ: 1
        AT POSITIONS:  1
    ===========================================================================
                                 TOKEN STATISTICS
    ===========================================================================
    Total $I rows:                       1
    Total docids:                        1
    Total occurrences:                   1
    Total token_info size:               3
    Total garbage size:                  0 (0.00%)
    Optimal $I rows:                     1
    Row fragmentation:                   0.00%
                                  MIN            MAX          AVERAGE
    Docids per $I row       :            1              1           1.00
    Bytes per $I row        :            3              3           3.00
    Occurrences per docid   :            1              1           1.00
    Bytes per docid         :            3              3           3.00
    Occ bytes per docid     :            1              1           1.00
    SCOTT@10gXE> VARIABLE g_ref REFCURSOR
    SCOTT@10gXE> DECLARE
      2    v_clob CLOB;
      3  BEGIN
      4    CTX_REPORT.INDEX_STATS ('test_idx', v_clob);
      5    OPEN :g_ref FOR SELECT v_clob FROM DUAL;
      6  END;
      7  /
    PL/SQL procedure successfully completed.
    SCOTT@10gXE> PRINT g_ref
    :B1
    ===========================================================================
                         STATISTICS FOR "SCOTT"."TEST_IDX"
    ===========================================================================
    indexed documents:                                                      3
    allocated docids:                                                       3
    $I rows:                                                                3
                                 TOKEN STATISTICS
    unique tokens:                                                          3
    average $I rows per token:                                           1.00
    tokens with most $I rows:
      TEST3 (0:TEXT)                                                        1
      TEST2 (0:TEXT)                                                        1
      TEST1 (0:TEXT)                                                        1
    average size per token:                                                 3
    tokens with largest size:
      TEST3 (0:TEXT)                                                        3
      TEST2 (0:TEXT)                                                        3
      TEST1 (0:TEXT)                                                        3
    average frequency per token:                                         1.00
    most frequent tokens:
      TEST3 (0:TEXT)                                                        1
      TEST2 (0:TEXT)                                                        1
      TEST1 (0:TEXT)                                                        1
    token statistics by type:
      token type:                                                      0:TEXT
        unique tokens:                                                      3
        total rows:                                                         3
        average rows:                                                    1.00
        total size:                                                         9
        average size:                                                       3
        average frequency:                                               1.00
        most frequent tokens:
          TEST3                                                             1
          TEST2                                                             1
          TEST1                                                             1
                             FRAGMENTATION STATISTICS
    total size of $I data:                                                  9
    $I rows:                                                                3
    estimated $I rows if optimal:                                           3
    estimated row fragmentation:                                          0 %
    garbage docids:                                                         0
    estimated garbage size:                                                 0
    most fragmented tokens:
      TEST3 (0:TEXT)                                                      0 %
      TEST2 (0:TEXT)                                                      0 %
      TEST1 (0:TEXT)                                                      0 %
    SCOTT@10gXE>
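    As an aside: since token_info is a BLOB, you can measure the largest entry in your own index directly rather than relying on a documented per-row limit. A minimal sketch against the example index above:
    SELECT MAX (DBMS_LOB.GETLENGTH (token_info)) AS max_token_info_bytes
    FROM   dr$test_idx$i;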

  • Optimal INI_TRANS value

    I have a large table (300 million rows, 150GB in size) with many concurrent users doing update/insert/select for update nowait on it, and I'm seeing several "ORA-00054: resource busy and acquire with NOWAIT specified" errors in the application logs. I have confirmed that the ORA-00054 errors were not caused by locking on the same records. I tried increasing the INI_TRANS value for this particular table to 10, up from 2, and the ORA-00054 errors went away. So I'm wondering: is there a good way to make an educated guess at the optimal INI_TRANS value for a particular table?

    I appreciate the responses. Am I correct to say that ITL slots take up space in the PCTFREE portion of a block, or somewhere else? Is there a way to see the records that make up a particular block?
    Shortage of INI_TRANS will indeed cause ORA-00054 error. The following is a test case to simulate the error:
    $ sqlplus / as sysdba
    SQL*Plus: Release 10.2.0.5.0 - Production on Wed Apr 3 23:26:35 2013
    Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options
    SQL> show parameter block_size
    NAME                                 TYPE        VALUE
    db_block_size                        integer     8192
    SQL> create table t1 (c1 varchar2(1336));
    Table created.
    SQL> select table_name, status, pct_free, ini_trans from user_tables where table_name='T1';
    TABLE_NAME                     STATUS     PCT_FREE  INI_TRANS
    T1                             VALID            10          1
    SQL> insert into t1 values ('a');
    1 row created.
    SQL> insert into t1 values ('b');
    1 row created.
    SQL> insert into t1 values ('c');
    1 row created.
    SQL> insert into t1 values ('d');
    1 row created.
    SQL> insert into t1 values ('e');
    1 row created.
    SQL> insert into t1 values ('f');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> update t1 set c1=rpad(c1,1335,c1) where length(c1)=1;
    6 rows updated.
    SQL> commit;
    Commit complete.
    SQL> select substr(c1,1,1) from t1 where c1 like 'a%' for update nowait;
    SUBS
    a
    SQL>
    *From another session:*
    SQL> select substr(c1,1,1) from t1 where c1 like 'b%' for update nowait;
    *From another session:*
    SQL> select substr(c1,1,1) from t1 where c1 like 'c%' for update nowait;
    select substr(c1,1,1) from t1 where c1 like 'c%' for update nowait
    ERROR at line 1:
    ORA-00054: resource busy and acquire with NOWAIT specified
    The USER_TABLES INI_TRANS reported "1", but I read somewhere that it is a data dictionary inconsistency and it should be the default "2".
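    For making the guess educated rather than trial and error: check whether the segment is actually accumulating ITL waits, then raise INITRANS accordingly. A minimal sketch, assuming the table is named BIG_TABLE (a placeholder):
    -- How many ITL waits has this segment accumulated since instance startup?
    SELECT object_name, statistic_name, value
    FROM   v$segment_statistics
    WHERE  object_name = 'BIG_TABLE'
    AND    statistic_name = 'ITL waits';
    -- Raise INITRANS; this only affects newly formatted blocks, so the table
    -- must be rebuilt (ALTER TABLE ... MOVE) if existing blocks should pick up
    -- the new value. MOVE leaves the indexes unusable, so rebuild them afterwards.
    ALTER TABLE big_table INITRANS 10;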

  • Exchange Log Shipping Replay queue length monitor

    Hi Guys,
    Can anyone tell me what kind of monitor the Log Shipping Replay queue length monitor is?
    Is it an average threshold monitor or a consecutive samples over threshold monitor?
    Thanks

    Hi,
    This monitor is optimized for the CCR scenario and raises an alert if the number of transaction logs waiting to be committed is greater than 15 logs and has been waiting for more than 5 minutes. Therefore, it is a Consecutive Samples over Threshold.
    You can also get the answer from Microsoft Exchange Server 2007 Management Pack Guide document (Page 72)
    http://download.microsoft.com/download/1/E/D/1ED18BCA-B96D-4184-89DB-EDD9A77E5040/OM2007_MP_EX2007_SP1.doc
    Niki Han
    TechNet Community Support

Maybe you are looking for

  • A Photoshop PDF doesn't look the same inside a PDF Portfolio

    Hi, I have a Photoshop PDF file that looks good on screen and includes re-touching I did to it. However, when I place it in a PDF portfolio, the edits aren't there anymore. Anyone know what might be going on? Thanks!

  • White Screen, NOT the shift-w issue

    FCP 5.1.4 on an Intel Mac. I have a slew of QT's in the sequence, that will work for a while, but then my sequence viewer will go to white. For no reason. At first I thought it was the shift-w thing, but once being conscious of it, I became aware of

  • Exchange Rate Type R - Closing Exchange Rate

    Hello Experts, I am current developing ABAP AP Aging reports.  I would like to ask what is the reason for defining Exchange Rate Type (R) - month end exchange rate instead of using the Standard Average Rate M.  Thanks in advance.

  • Converting field NAME1 to fields NAME_LAST and NAME_FIRST

    Hi to all of you, I have a question concerning to the next issue: When we run transaction JUCDCM (Convert customer and vendor masters to SAP Business Partner), the information contained in field NAME1 from customer/vendor master is migrated to field

  • Copy cells with borders, colours etc

    I am doing a financial reconciliation for several months of data.  I have formatted the first month - it takes 12 columns x 24 rows.  Various cells within this area have borders or colours assigned to them.  Now that I have the first month, I want to