
How to Easily Clone an Oracle BRM Test Database


Alex Bramford - SSG Blog - Oracle BRM and Informatica Data Management

Cloning a database for BRM is not merely a matter of making a copy of the database: the integration must be preserved so that all communication stays intact. In the following steps, we will walk through cloning one BRM database (the source machine) to a new database (the target machine).

Clone database

Source – extract

  • We will use the Oracle Data Pump Export with parameter files to write the BRM/IFW/JSA schemas to a dumpfile for transportation to the target.
  • Disk space needs to be verified to make sure there is enough room for the dumpfiles. These steps are outlined in the intro blog to this series.
  • On the source machine, run the following SQL to determine the directory object to which the dumpfile will be written. In this example we will write the dumpfile to DMPDIR=/export/home/oracle/tmp:
column directory_name format a40;
column directory_path format a50;
SELECT directory_name, directory_path FROM dba_directories order by directory_name;

DIRECTORY_NAME                           DIRECTORY_PATH
---------------------------------------- --------------------------------------------------
...
DMPDIR                                   /export/home/oracle/tmp
...
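
Before running the export, it is also worth a quick free-space check on that path (a simple sketch; the space required depends on your schema sizes):

df -h /export/home/oracle/tmp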

Export the PIN schema using Oracle Data Pump export with a PARFILE

PARFILE settings explained

USERID=PINDEV2/PINDEV2
  • Log in to the Oracle database.
SCHEMAS=PINDEV2
  • Specifies that only the objects in the PINDEV2 schema are to be exported.
CONTENT=ALL
  • Unloads both data and metadata.
DIRECTORY=DMPDIR
  • Write the dumpfile to the path stored in the DMPDIR directory object, i.e. /export/home/oracle/tmp.
DUMPFILE=pin74_dev_clone.dmp
  • Name of the dumpfile containing the exported data and metadata.
LOGFILE=pin74_dev_clone.log
REUSE_DUMPFILES=Y
  • If the source database is 11g, you can specify REUSE_DUMPFILES=Y to overwrite an existing dumpfile. If the source database is 10g or lower, existing dumpfiles must be removed or renamed at the O/S level.

Perform the cloning

cd /export/home/oracle/cloning
vi options_exp_pindb.par
USERID=PINDEV2/PINDEV2
SCHEMAS=PINDEV2
CONTENT=ALL
DIRECTORY=DMPDIR
DUMPFILE=pin74_dev_clone.dmp
LOGFILE=pin74_dev_clone.log
#REUSE_DUMPFILES=Y
expdp PARFILE=options_exp_pindb.par

Export the IFW schema

vi options_exp_ifw.par
USERID=IFWDEV2/IFWDEV2
SCHEMAS=IFWDEV2
CONTENT=ALL
DIRECTORY=DMPDIR
DUMPFILE=ifw74_dev_clone.dmp
LOGFILE=ifw74_dev_clone.log
#REUSE_DUMPFILES=Y
expdp PARFILE=options_exp_ifw.par

Export the JSA schema

vi options_exp_jsa.par
USERID=JSADEV2/JSADEV2
SCHEMAS=JSADEV2
CONTENT=ALL
DIRECTORY=DMPDIR
DUMPFILE=jsa74_dev_clone.dmp
LOGFILE=jsa74_dev_clone.log
#REUSE_DUMPFILES=Y
expdp PARFILE=options_exp_jsa.par

Outputs of export

  • Dumpfiles and logs are written to /export/home/oracle/tmp.
ls -l /export/home/oracle/tmp
total 305506
drwxr-xr-x   2 oracle   dba          512 Jan 30 12:13 ./
drwxr-xr-x  35 oracle   dba         2048 Jan 30 12:13 ../
-rw-r-----   1 oracle   dba      6774784 Jan 30 12:20 ifw74_dev_clone.dmp
-rw-r--r--   1 oracle   dba        12485 Jan 30 12:20 ifw74_dev_clone.log
-rw-r-----   1 oracle   dba       507904 Jan 30 12:18 jsa74_dev_clone.dmp
-rw-r--r--   1 oracle   dba         2420 Jan 30 12:18 jsa74_dev_clone.log
-rw-r-----   1 oracle   dba      148963328 Jan 30 12:19 pin74_dev_clone.dmp
-rw-r--r--   1 oracle   dba        54548 Jan 30 12:19 pin74_dev_clone.log
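
Before copying anything, scan the export logs for Oracle errors (a quick check using the log names above):

grep "ORA-" /export/home/oracle/tmp/*_clone.log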

Data Pump compatibility issue (10g to 11g)

  • Data Pump fails with the following errors when importing into an 11g database from a 10g database over a database link.
  • This is a known issue (Doc ID 1062428.1) for which the solution is to apply the latest patch set to the 10g database, e.g. 10.2.0.4 or 10.1.0.5.
ORA-39006: internal error
ORA-39113: Unable to determine database version
ORA-04052: error occurred when looking up remote object SYS.DBMS_UTILITY@<tns alias>
ORA-00604: error occurred at recursive SQL level 3
ORA-06544: PL/SQL: internal error, arguments: [55916], [], [], [], [], [], [], []
ORA-06553: PLS-801: internal error [55916]
ORA-02063: preceding 2 lines from <tns alias>
ORA-39097: Data Pump job encountered unexpected error -4052
ORA-39006: internal error
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3444
ORA-06512: at "SYS.DBMS_DATAPUMP", line 5233
ORA-06512: at line 2
  • In this post, since we are importing from 10g to 11g, we will forgo the database link and simply SCP the dumpfile to the target server.

Copy the dumpfiles to the target

  • Work on the target database server.
  • On the target machine, run the following SQL to determine the directory to which the dumpfiles will be copied.
  • In this example the dumpfiles will be copied to DMPDIR=/home/oracle/tmp:
column directory_name format a40;
column directory_path format a50;
SELECT directory_name, directory_path FROM dba_directories order by directory_name;
DIRECTORY_NAME                           DIRECTORY_PATH
---------------------------------------- --------------------------------------------------
BRMREPORTS                               /home/oracle/BRMREPORTS
DATA_PUMP_DIR                            /opt/oracle/admin/pindb/dpdump/
DMPDIR                                   /home/oracle/tmp
ORACLE_OCM_CONFIG_DIR                    /opt/oracle/product/11.2.0/dbhome/ccr/state
XMLDIR                                   /ade/b/2125410156/oracle/rdbms/xml   

cd /home/oracle/cloning/pindev3
scp oracle@sourcemachine:/export/home/oracle/tmp/*.dmp /home/oracle/tmp/.
ls -l /home/oracle/tmp
total 152584
-rw-r----- 1 oracle oinstall   6774784 Jan 30 13:30 ifw74_dev_clone.dmp
-rw-r----- 1 oracle oinstall    507904 Jan 30 13:30 jsa74_dev_clone.dmp
-rw-r----- 1 oracle oinstall 148963328 Jan 30 13:31 pin74_dev_clone.dmp
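
Optionally, compare checksums against the source to confirm the dumpfiles arrived intact (a sketch, assuming cksum is available on both hosts):

ssh oracle@sourcemachine 'cksum /export/home/oracle/tmp/*.dmp'
cksum /home/oracle/tmp/*.dmp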

Create target schemas

The target schemas must be created in this order:

  1. PIN
  2. JSA
  3. IFW

Create pin schema

vi drop_pindb.sql
drop tablespace PINDEV3 INCLUDING CONTENTS;
drop tablespace PINDEV3X INCLUDING CONTENTS;
drop user PINDEV3 cascade;

purge DBA_RECYCLEBIN;
quit;
vi create_pindb.sql
create tablespace PINDEV3 datafile '/data1/oradata/pindb/PINDEV3.dbf' size 600M
reuse autoextend on next 200M maxsize 10G default
storage( initial 64K next 64K pctincrease 0);

create tablespace PINDEV3X datafile '/data1/oradata/pindb/PINDEV3X.dbf' size 400M
reuse autoextend on next 200M maxsize 10G default
storage( initial 64K next 64K pctincrease 0);

create user PINDEV3 identified by PINDEV3
default tablespace PINDEV3
temporary tablespace pintemp
quota unlimited on PINDEV3
quota unlimited on PINDEV3X;

grant dba to PINDEV3;
grant execute on dbms_lock to PINDEV3;

quit;
sqlplus "system/oracle as sysdba" @drop_pindb.sql
sqlplus "system/oracle as sysdba" @create_pindb.sql
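
A quick sanity check that the user and tablespaces now exist before importing (a minimal sketch using the names created above):

sqlplus -s "system/oracle as sysdba" <<EOF
select username, default_tablespace from dba_users where username = 'PINDEV3';
select tablespace_name, status from dba_tablespaces where tablespace_name in ('PINDEV3', 'PINDEV3X');
EOF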

Create JSA schema

vi drop_jsa.sql
drop user JSADEV3 cascade;

drop role ROLE_JSADEV3_SEL;
drop role ROLE_JSADEV3_ALL;

drop tablespace JSADEV3_DAT  including contents and datafiles;
drop tablespace JSADEV3_IDX  including contents and datafiles;

purge dba_recyclebin;
quit;
vi create_jsa.sql
Example 'create_jsa.sql' file:
--------------------------------------------------------------------------------
-- jsaDEV3_tablespaces.sql
--------------------------------------------------------------------------------
create tablespace JSADEV3_DAT datafile '/data1/oradata/pindb/JSADEV3_DAT.dbf' size 10m autoextend on next 100m maxsize 10g;
create tablespace JSADEV3_IDX datafile '/data1/oradata/pindb/JSADEV3_IDX.dbf' size 5m autoextend on next 100m maxsize 10g;

--------------------------------------------------------------------------------
-- jsaDEV3_roles.sql
--------------------------------------------------------------------------------
create role ROLE_JSADEV3_SEL;
grant create session to ROLE_JSADEV3_SEL;

-- ------------------------------------------------------------
--   role: ROLE_JSADEV3_ALL
-- ------------------------------------------------------------
create role ROLE_JSADEV3_ALL;
grant connect              to ROLE_JSADEV3_ALL;
grant create table         to ROLE_JSADEV3_ALL;
grant create view          to ROLE_JSADEV3_ALL;
grant create synonym       to ROLE_JSADEV3_ALL;
grant create any index     to ROLE_JSADEV3_ALL;
grant create sequence      to ROLE_JSADEV3_ALL;
grant create cluster       to ROLE_JSADEV3_ALL;
grant create database link to ROLE_JSADEV3_ALL;
grant alter  session       to ROLE_JSADEV3_ALL;
grant ROLE_JSADEV3_SEL     to ROLE_JSADEV3_ALL;

-- ------------------------------------------------------------
--   user: jsa
-- ------------------------------------------------------------
create user JSADEV3 identified by JSADEV3
        default tablespace JSADEV3_DAT
        temporary tablespace TEMP
        quota unlimited on JSADEV3_DAT
        quota unlimited on JSADEV3_IDX;

grant ROLE_JSADEV3_ALL to JSADEV3 with admin option;
grant create public synonym to JSADEV3;

quit;
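
As with the PIN schema, run the drop and create scripts as a DBA (same pattern as the sqlplus commands above; adjust credentials to your environment):

sqlplus "system/oracle as sysdba" @drop_jsa.sql
sqlplus "system/oracle as sysdba" @create_jsa.sql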


Create ifw schema

vi drop_ifw.sql
Example 'drop_ifw.sql' file:
drop user IFWDEV3 cascade;

drop role IFWDEV3_ROLE_SEL;
drop role IFWDEV3_ROLE_ALL;

drop tablespace AGGREGATE_IFWDEV3_TS_1_DAT including contents and datafiles;
drop tablespace AGGREGATE_IFWDEV3_TS_1_IDX including contents and datafiles;

drop tablespace IFWDEV3_TS_1_DAT including contents and datafiles;
drop tablespace IFWDEV3_TS_1_IDX including contents and datafiles;
drop tablespace IFWDEV3_TS_2_DAT including contents and datafiles;
drop tablespace IFWDEV3_TS_2_IDX including contents and datafiles;
drop tablespace IFWDEV3_TS_3_DAT including contents and datafiles;
drop tablespace IFWDEV3_TS_3_IDX including contents and datafiles;
drop tablespace IFWDEV3_TS_4_DAT including contents and datafiles;
drop tablespace IFWDEV3_TS_4_IDX including contents and datafiles;

purge dba_recyclebin;
quit;


vi create_ifw.sql
Example 'create_ifw.sql' file:
-- ------------------------------------------------------------
--   tablespaces: aggregate
-- ------------------------------------------------------------
create tablespace AGGREGATE_IFWDEV3_TS_1_DAT datafile '/data1/oradata/pindb/AGGREGATE_IFWDEV3_TS_1_DAT.dbf' size   5m autoextend on next 200m maxsize 10g;
create tablespace AGGREGATE_IFWDEV3_TS_1_IDX datafile '/data1/oradata/pindb/AGGREGATE_IFWDEV3_TS_1_IDX.dbf' size  10m autoextend on next 200m maxsize 10g;

-- ------------------------------------------------------------
--   tablespaces: integrate
-- ------------------------------------------------------------
create tablespace IFWDEV3_TS_1_DAT datafile '/data1/oradata/pindb/IFWDEV3_TS_1_DAT.dbf' size  25m autoextend on next 200m maxsize 10g;
create tablespace IFWDEV3_TS_1_IDX datafile '/data1/oradata/pindb/IFWDEV3_TS_1_IDX.dbf' size  15m autoextend on next 200m maxsize 10g;

create tablespace IFWDEV3_TS_2_DAT datafile '/data1/oradata/pindb/IFWDEV3_TS_2_DAT.dbf' size 200m autoextend on next 200m maxsize 10g;
create tablespace IFWDEV3_TS_2_IDX datafile '/data1/oradata/pindb/IFWDEV3_TS_2_IDX.dbf' size 200m autoextend on next 200m maxsize 10g;

create tablespace IFWDEV3_TS_3_DAT datafile '/data1/oradata/pindb/IFWDEV3_TS_3_DAT.dbf' size 200m autoextend on next 200m maxsize 10g;
create tablespace IFWDEV3_TS_3_IDX datafile '/data1/oradata/pindb/IFWDEV3_TS_3_IDX.dbf' size 200m autoextend on next 200m maxsize 10g;

create tablespace IFWDEV3_TS_4_DAT datafile '/data1/oradata/pindb/IFWDEV3_TS_4_DAT.dbf' size 200m autoextend on next 200m maxsize 10g;
create tablespace IFWDEV3_TS_4_IDX datafile '/data1/oradata/pindb/IFWDEV3_TS_4_IDX.dbf' size 200m autoextend on next 200m maxsize 10g;

-- ------------------------------------------------------------
--   role: IFWDEV3_ROLE_SEL
-- ------------------------------------------------------------
create role IFWDEV3_ROLE_SEL;
grant create session to IFWDEV3_ROLE_SEL;
grant ROLE_JSADEV3_SEL to IFWDEV3_ROLE_SEL;
grant ROLE_JSADEV3_ALL to IFWDEV3_ROLE_SEL;

-- ------------------------------------------------------------
--   role: IFWDEV3_ROLE_ALL
-- ------------------------------------------------------------
create role IFWDEV3_ROLE_ALL;
grant connect to IFWDEV3_ROLE_ALL;
grant IFWDEV3_ROLE_SEL to IFWDEV3_ROLE_ALL;

-- ------------------------------------------------------------
--   user: integrate
-- ------------------------------------------------------------
create user IFWDEV3 identified by IFWDEV3
        default tablespace IFWDEV3_TS_1_DAT
        temporary tablespace TEMP
        quota unlimited on IFWDEV3_TS_1_DAT
        quota unlimited on IFWDEV3_TS_1_IDX
        quota unlimited on IFWDEV3_TS_2_DAT
        quota unlimited on IFWDEV3_TS_2_IDX
        quota unlimited on IFWDEV3_TS_3_DAT
        quota unlimited on IFWDEV3_TS_3_IDX
        quota unlimited on IFWDEV3_TS_4_DAT
        quota unlimited on IFWDEV3_TS_4_IDX
        quota unlimited on AGGREGATE_IFWDEV3_TS_1_DAT
        quota unlimited on AGGREGATE_IFWDEV3_TS_1_IDX;

grant IFWDEV3_ROLE_ALL to IFWDEV3 with admin option;
grant create public synonym to IFWDEV3;
grant drop   public synonym to IFWDEV3;
grant create view           to IFWDEV3;
grant create sequence       to IFWDEV3;
grant create table          to IFWDEV3;
grant create any index      to IFWDEV3;
grant create procedure      to IFWDEV3;

quit;
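
Again, run the drop and create scripts as a DBA:

sqlplus "system/oracle as sysdba" @drop_ifw.sql
sqlplus "system/oracle as sysdba" @create_ifw.sql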


Pipeline Synonyms

  • Typically the synonyms are public but this will not work if more than one Pipeline instance is set up on a database.
  • By default, the scripts for JSA and IFW will create public synonyms. These will need to be adjusted if public synonyms cannot be used; a sketch of one alternative follows.
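
One alternative when public synonyms are not an option is to generate private synonyms in the schema of each connecting user, along the lines of the generator query commented at the top of IFW_synonyms.sql below. A rough sketch (run as a DBA, or switch dba_objects to all_objects as the connecting user; the owner name here matches this example):

select 'create synonym ' || object_name || ' for IFWDEV3.' || object_name || ';'
from dba_objects
where owner = 'IFWDEV3'
and object_type in ('TABLE', 'SEQUENCE', 'VIEW')
order by object_type;

Spool the generated statements to a file and run them as the connecting user.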

IFW Synonyms

vi IFW_synonyms.sql
Example 'IFW_synonyms.sql' file:

--------------------------------------------------------------------------------
-- select 'create public synonym ' || object_name || ' for ' || object_name || ';'
-- from user_objects
-- where object_type in ( 'TABLE', 'SEQUENCE', 'VIEW' )
-- order by object_type;

-- ============================================================
-- SEQUENCES
-- ============================================================

create public synonym IFW_SEQ_AGGREGATION for IFW_SEQ_AGGREGATION;

create public synonym IFW_SEQ_CALENDAR for IFW_SEQ_CALENDAR;

create public synonym IFW_SEQ_CLASS for IFW_SEQ_CLASS;

create public synonym IFW_SEQ_CLASSCON for IFW_SEQ_CLASSCON;

create public synonym IFW_SEQ_DAYCODE for IFW_SEQ_DAYCODE;

create public synonym IFW_SEQ_DISCOUNTCONFIG for IFW_SEQ_DISCOUNTCONFIG;

create public synonym IFW_SEQ_DISCOUNTMODEL for IFW_SEQ_DISCOUNTMODEL;

create public synonym IFW_SEQ_DISCOUNTSTEP for IFW_SEQ_DISCOUNTSTEP;

create public synonym IFW_SEQ_FIELD_ID for IFW_SEQ_FIELD_ID;

create public synonym IFW_SEQ_RATEPLAN for IFW_SEQ_RATEPLAN;

create public synonym IFW_SEQ_PRICEMODEL for IFW_SEQ_PRICEMODEL;

create public synonym IFW_SEQ_NO for IFW_SEQ_NO;

create public synonym IFW_SEQ_NETWORKMODEL for IFW_SEQ_NETWORKMODEL;

create public synonym IFW_SEQ_GROUPING_CNF for IFW_SEQ_GROUPING_CNF;

create public synonym IFW_SEQ_GROUPING for IFW_SEQ_GROUPING;

create public synonym IFW_SEQ_GRANTED_DISCOUNT for IFW_SEQ_GRANTED_DISCOUNT;

create public synonym IFW_SEQ_GEOMODEL for IFW_SEQ_GEOMODEL;

create public synonym IFW_SEQ_GENERIC for IFW_SEQ_GENERIC;

create public synonym IFW_SEQ_ZONEMODEL for IFW_SEQ_ZONEMODEL;

create public synonym IFW_SEQ_TIMEZONE for IFW_SEQ_TIMEZONE;

create public synonym IFW_SEQ_TIMEMODEL for IFW_SEQ_TIMEMODEL;

create public synonym IFW_SEQ_TIMEINTERVAL for IFW_SEQ_TIMEINTERVAL;

create public synonym IFW_SEQ_SYSTEMBRAND for IFW_SEQ_SYSTEMBRAND;

create public synonym IFW_SEQ_SPECIALDAYRATE for IFW_SEQ_SPECIALDAYRATE;

create public synonym IFW_SEQ_SEMAPHORE for IFW_SEQ_SEMAPHORE;

create public synonym IFW_SEQ_SCENARIO for IFW_SEQ_SCENARIO;

create public synonym IFW_SEQ_DISCOUNTTRIGGER for IFW_SEQ_DISCOUNTTRIGGER;

create public synonym IFW_SEQ_DISCOUNTRULE for IFW_SEQ_DISCOUNTRULE;

create public synonym IFW_SEQ_DISCOUNTMASTER for IFW_SEQ_DISCOUNTMASTER;

create public synonym IFW_SEQ_DISCOUNTCONDITION for IFW_SEQ_DISCOUNTCONDITION;

create public synonym IFW_SEQ_DISCOUNTBALIMPACT for IFW_SEQ_DISCOUNTBALIMPACT;

create public synonym IFW_SEQ_CONTENTPROVIDER for IFW_SEQ_CONTENTPROVIDER;

create public synonym IFW_SEQ_CONDITION for IFW_SEQ_CONDITION;

create public synonym IFW_SEQ_CLASSITEM for IFW_SEQ_CLASSITEM;

create public synonym IFW_SEQ_MODELSELECTOR for IFW_SEQ_MODELSELECTOR;

create public synonym IFW_SEQ_SELECTORRULE for IFW_SEQ_SELECTORRULE;

create public synonym IFW_SEQ_SELECTORDETAIL for IFW_SEQ_SELECTORDETAIL;

create public synonym IFW_SEQ_SELECTORBLOCK for IFW_SEQ_SELECTORBLOCK;

create public synonym IFW_SEQ_CHANGESET for IFW_SEQ_CHANGESET;

create public synonym IFW_SEQ_CSAUDIT for IFW_SEQ_CSAUDIT;

-- ============================================================
-- TABLES
-- ============================================================

create public synonym IC_DAILY for IC_DAILY;

create public synonym IC_DAILY_ALTERNATE for IC_DAILY_ALTERNATE;

create public synonym IFW_AGGREGATION for IFW_AGGREGATION;

create public synonym IFW_ALIAS_MAP for IFW_ALIAS_MAP;

create public synonym IFW_APN_GROUP for IFW_APN_GROUP;

create public synonym IFW_APN_MAP for IFW_APN_MAP;

create public synonym IFW_CALENDAR for IFW_CALENDAR;

create public synonym IFW_CLASS for IFW_CLASS;

create public synonym IFW_CLASSCON for IFW_CLASSCON;

create public synonym IFW_CLASSCON_LNK for IFW_CLASSCON_LNK;

create public synonym IFW_CLASSITEM for IFW_CLASSITEM;

create public synonym IFW_CLASS_LNK for IFW_CLASS_LNK;

create public synonym IFW_CONDITION for IFW_CONDITION;

create public synonym IFW_CURRENCY for IFW_CURRENCY;

create public synonym IFW_DAYCODE for IFW_DAYCODE;

create public synonym IFW_DESTINDESC for IFW_DESTINDESC;

create public synonym IFW_STANDARD_ZONE for IFW_STANDARD_ZONE;

create public synonym IFW_SPLITTING_TYPE for IFW_SPLITTING_TYPE;

create public synonym IFW_SPECIALDAY_LNK for IFW_SPECIALDAY_LNK;

create public synonym IFW_SPECIALDAYRATE for IFW_SPECIALDAYRATE;

create public synonym IFW_SOCIALNUMBER for IFW_SOCIALNUMBER;

create public synonym IFW_SLA for IFW_SLA;

create public synonym IFW_SERVICE_MAP for IFW_SERVICE_MAP;

create public synonym IFW_SERVICECLASS for IFW_SERVICECLASS;

create public synonym IFW_SERVICE for IFW_SERVICE;

create public synonym IFW_USAGECLASS_MAP for IFW_USAGECLASS_MAP;

create public synonym IFW_USAGECLASS for IFW_USAGECLASS;

create public synonym IFW_UOM_MAP for IFW_UOM_MAP;

create public synonym IFW_UOM for IFW_UOM;

create public synonym IFW_TRUNK_CNF for IFW_TRUNK_CNF;

create public synonym IFW_TRUNK for IFW_TRUNK;

create public synonym IFW_TIMEZONE for IFW_TIMEZONE;

create public synonym IFW_TIMEMODEL_LNK for IFW_TIMEMODEL_LNK;

create public synonym IFW_TIMEMODEL for IFW_TIMEMODEL;

create public synonym IFW_ZONEMODEL for IFW_ZONEMODEL;

create public synonym IFW_USC_GROUP for IFW_USC_GROUP;

create public synonym IFW_USAGETYPE for IFW_USAGETYPE;

create public synonym IFW_TIMEINTERVAL for IFW_TIMEINTERVAL;

create public synonym IFW_TAXGROUP for IFW_TAXGROUP;

create public synonym IFW_TAXCODE for IFW_TAXCODE;

create public synonym IFW_TAX for IFW_TAX;

create public synonym IFW_TAM for IFW_TAM;

create public synonym IFW_SYSTEM_BRAND for IFW_SYSTEM_BRAND;

create public synonym IFW_SWITCH for IFW_SWITCH;

create public synonym IFW_DICTIONARY for IFW_DICTIONARY;

create public synonym IFW_DISCARDING for IFW_DISCARDING;

create public synonym IFW_DISCOUNTDETAIL for IFW_DISCOUNTDETAIL;

create public synonym IFW_DISCOUNTMASTER for IFW_DISCOUNTMASTER;

create public synonym IFW_DISCOUNTMODEL for IFW_DISCOUNTMODEL;

create public synonym IFW_DISCOUNTRULE for IFW_DISCOUNTRULE;

create public synonym IFW_DISCOUNTSTEP for IFW_DISCOUNTSTEP;

create public synonym IFW_EDRC_DESC for IFW_EDRC_DESC;

create public synonym IFW_EDRC_FIELD for IFW_EDRC_FIELD;

create public synonym IFW_EXCHANGE_RATE for IFW_EXCHANGE_RATE;

create public synonym IFW_GLACCOUNT for IFW_GLACCOUNT;

create public synonym IFW_GROUPING for IFW_GROUPING;

create public synonym IFW_GROUPING_CNF for IFW_GROUPING_CNF;

create public synonym IFW_HOLIDAY for IFW_HOLIDAY;

create public synonym IFW_ICPRODUCT for IFW_ICPRODUCT;

create public synonym IFW_ICPRODUCT_CNF for IFW_ICPRODUCT_CNF;

create public synonym IFW_ICPRODUCT_RATE for IFW_ICPRODUCT_RATE;

create public synonym IFW_ISCRIPT for IFW_ISCRIPT;

create public synonym IFW_MAP_GROUP for IFW_MAP_GROUP;

create public synonym IFW_NETWORKMODEL for IFW_NETWORKMODEL;

create public synonym IFW_NOPRODUCT for IFW_NOPRODUCT;

create public synonym IFW_NOPRODUCT_CNF for IFW_NOPRODUCT_CNF;

create public synonym IFW_NOSP for IFW_NOSP;

create public synonym IFW_NO_BILLRUN for IFW_NO_BILLRUN;

create public synonym IFW_PIPELINE for IFW_PIPELINE;

create public synonym IFW_POI for IFW_POI;

create public synonym IFW_POIAREA_LNK for IFW_POIAREA_LNK;

create public synonym IFW_PRICEMODEL for IFW_PRICEMODEL;

create public synonym IFW_QUEUE for IFW_QUEUE;

create public synonym IFW_RATEADJUST for IFW_RATEADJUST;

create public synonym IFW_RATEPLAN for IFW_RATEPLAN;

create public synonym IFW_RATEPLAN_CNF for IFW_RATEPLAN_CNF;

create public synonym IFW_RATEPLAN_VER for IFW_RATEPLAN_VER;

create public synonym IFW_RESOURCE for IFW_RESOURCE;

create public synonym IFW_REVENUEGROUP for IFW_REVENUEGROUP;

create public synonym IFW_RSC_GROUP for IFW_RSC_GROUP;

create public synonym IFW_RULE for IFW_RULE;

create public synonym IFW_RULEITEM for IFW_RULEITEM;

create public synonym IFW_RULESET for IFW_RULESET;

create public synonym IFW_RULESETLIST for IFW_RULESETLIST;

create public synonym IFW_RUM for IFW_RUM;

create public synonym IFW_RUMGROUP for IFW_RUMGROUP;

create public synonym IFW_RUMGROUP_LNK for IFW_RUMGROUP_LNK;

create public synonym IFW_SCENARIO for IFW_SCENARIO;

create public synonym IFW_SEGMENT for IFW_SEGMENT;

create public synonym IFW_SEGRATE_LNK for IFW_SEGRATE_LNK;

create public synonym IFW_SEGZONE_LNK for IFW_SEGZONE_LNK;

create public synonym IFW_REF_MAP for IFW_REF_MAP;

create public synonym IFW_DBVERSION for IFW_DBVERSION;

create public synonym IFW_DUPLICATECHECK for IFW_DUPLICATECHECK;

create public synonym IFW_DSCTRIGGER for IFW_DSCTRIGGER;

create public synonym IFW_IMPACT_CAT for IFW_IMPACT_CAT;

create public synonym IFW_SEQCHECK for IFW_SEQCHECK;

create public synonym IFW_DSCMDL_VER for IFW_DSCMDL_VER;

create public synonym IFW_GEO_MODEL for IFW_GEO_MODEL;

create public synonym IFW_GEOAREA_LNK for IFW_GEOAREA_LNK;

create public synonym IFW_NETWORKOPER for IFW_NETWORKOPER;

create public synonym IFW_ICPRODUCT_GRP for IFW_ICPRODUCT_GRP;

create public synonym IFW_USC_MAP for IFW_USC_MAP;

create public synonym IFW_PRICEMDL_STEP for IFW_PRICEMDL_STEP;

create public synonym IFW_GEO_ZONE for IFW_GEO_ZONE;

create public synonym IFW_SEQLOG_OUT for IFW_SEQLOG_OUT;

create public synonym IFW_DSCMDL_CNF for IFW_DSCMDL_CNF;

create public synonym IFW_DSCCONDITION for IFW_DSCCONDITION;

create public synonym IFW_DSCBALIMPACT for IFW_DSCBALIMPACT;

create public synonym IFW_RSC_MAP for IFW_RSC_MAP;

create public synonym IFW_LERG_DATA for IFW_LERG_DATA;

create public synonym IFW_SEQLOG_IN for IFW_SEQLOG_IN;

create public synonym IFW_CSSTATE for IFW_CSSTATE;

create public synonym IFW_CHANGESET for IFW_CHANGESET;

create public synonym IFW_CSLOCK for IFW_CSLOCK;

create public synonym IFW_CSAUDIT for IFW_CSAUDIT;

create public synonym IFW_CSREFERENCE for IFW_CSREFERENCE;

create public synonym IFW_CIBER_OCC for IFW_CIBER_OCC;

create public synonym IFW_MODEL_SELECTOR for IFW_MODEL_SELECTOR;

create public synonym IFW_SELECTOR_RULESET for IFW_SELECTOR_RULESET;

create public synonym IFW_SELECTOR_RULE for IFW_SELECTOR_RULE;

create public synonym IFW_SELECTOR_RULE_LNK for IFW_SELECTOR_RULE_LNK;

create public synonym IFW_SELECTOR_DETAIL for IFW_SELECTOR_DETAIL;

create public synonym IFW_SELECTOR_BLOCK_LNK for IFW_SELECTOR_BLOCK_LNK;

create public synonym IFW_SELECTOR_BLOCK for IFW_SELECTOR_BLOCK;

-- ============================================================
-- VIEWS
-- ============================================================

create public synonym IFW_ZONE for IFW_ZONE;

-- ============================================================
-- PROCEDURES
-- ============================================================

create or replace public synonym PROC_CHECK_IDX_DUPCHK for PROC_CHECK_IDX_DUPCHK;

create or replace public synonym PROC_CREATE_IDX_DUPCHK for PROC_CREATE_IDX_DUPCHK;

create or replace public synonym PROC_DROP_IDX_DUPCHK for PROC_DROP_IDX_DUPCHK;

JSA Synonyms

vi JSA_Synonyms.sql
...
--------------------------------------------------------------------------------
-- $Log: JSA_Synonyms.sql,v $
-- Revision 1.1  2002/02/04 10:28:19
-- Remove public synonyms from create script.
--
--------------------------------------------------------------------------------
-- ============================================================
-- SEQUENCES
-- ============================================================
create public synonym JSA_SEQ_USER_ID for JSA_SEQ_USER_ID;
create public synonym JSA_SEQ_MODULE_ID for JSA_SEQ_MODULE_ID;
create public synonym JSA_SEQ_USERGROUP_ID for JSA_SEQ_USERGROUP_ID;
create public synonym JSA_SEQ_BULLETINBOARD_ID for JSA_SEQ_BULLETINBOARD_ID;

-- ============================================================
-- TABLES
-- ============================================================
create public synonym JSA_BULLETINBOARD for JSA_BULLETINBOARD;
create public synonym JSA_USER for JSA_USER;
create public synonym JSA_USERGROUP for JSA_USERGROUP;
create public synonym JSA_MODULE for JSA_MODULE;
create public synonym JSA_USERRIGHT for JSA_USERRIGHT;
create public synonym JSA_GROUPRIGHT for JSA_GROUPRIGHT;
create public synonym JSA_GROUPUSER for JSA_GROUPUSER;
create public synonym JSA_CONFIG for JSA_CONFIG;
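
Once the schemas have been imported (next section), the synonym scripts can be run as the schema owners, since both users were granted create public synonym above (a sketch; adjust connect strings to your environment):

sqlplus "ifwdev3/ifwdev3" @IFW_synonyms.sql
sqlplus "jsadev3/jsadev3" @JSA_Synonyms.sql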

Import the data using Oracle Data Pump import with a PARFILE

PARFILE settings explained

USERID=system/oracle
  • Log in to the Oracle database.
DUMPFILE=DMPDIR:pin74_dev_clone.dmp
  • DMPDIR specifies the Oracle directory object where the dumpfile is located.
  • Filename specifies the dumpfile (exported and copied from the source database).
LOGFILE=DMPDIR:pin74_dev_import.log
  • DMPDIR specifies the Oracle directory object where the logfile is created.
  • Filename is the name of the logfile.
REMAP_TABLESPACE=PIN74SRC:PINDEV3
  • Source tablespace PIN74SRC becomes target tablespace PINDEV3.
REMAP_SCHEMA=PIN74SRC2:PINDEV3
  • Source schema PIN74SRC2 becomes target schema PINDEV3.
EXCLUDE=USER
  • Do not re-create the USER PINDEV3, since it’s created in the SQL above.
TRANSFORM=OID:n
  • Prevent ORA-02304  errors by using new object IDs in the target database – do not use the same object IDs that were used in the source database.
  • To retain the object IDs used in the source database, use TRANSFORM=OID:y

PIN schema

cd /home/oracle/cloning/pindev3
vi options_imp_pindb.par
USERID=system/oracle
DUMPFILE=DMPDIR:pin74_dev_clone.dmp
LOGFILE=DMPDIR:pin74_dev_import.log
CONTENT=ALL
REMAP_TABLESPACE=PIN74SRC:PINDEV3
REMAP_TABLESPACE=PIN74SRCX:PINDEV3X
REMAP_SCHEMA=PIN74SRC2:PINDEV3
TRANSFORM=OID:n
EXCLUDE=USER
impdp PARFILE=options_imp_pindb.par

JSA schema

vi options_imp_jsa.par
USERID=system/oracle
DUMPFILE=DMPDIR:jsa74_dev_clone.dmp
LOGFILE=DMPDIR:jsa74_dev_import.log
CONTENT=ALL
TRANSFORM=OID:n
EXCLUDE=USER
REMAP_SCHEMA=JSA74BA2:JSADEV3
REMAP_TABLESPACE=JSA74BA2:JSADEV3
REMAP_TABLESPACE=JSA74BA2X:JSADEV3X
impdp PARFILE=options_imp_jsa.par

IFW schema

vi options_imp_ifw.par
USERID=system/oracle
DUMPFILE=DMPDIR:ifw74_dev_clone.dmp
LOGFILE=DMPDIR:ifw74_dev_import.log
CONTENT=ALL
SCHEMAS=IFW74BA2
TRANSFORM=OID:n
EXCLUDE=USER
REMAP_SCHEMA=IFW74BA2:IFWDEV3
impdp PARFILE=options_imp_ifw.par

Connectivity

/etc/hosts

  • Ensure there is an entry for the target database server in the /etc/hosts file.
vi /etc/hosts
#
# Internet host table
#
::1     localhost
127.0.0.1       localhost
10.32.64.224    targetmachine    

tnsnames.ora

  • Ensure the target database service names are mapped to connect descriptors in the tnsnames.ora:
vi $ORACLE_HOME/network/admin/tnsnames.ora
# tnsnames.ora Network Configuration File: /opt/oracle/oracle/product/11g/client/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

PINDB11G =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = targetmachine)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = pindb)
    )
  )


tnsping PINDB11G

TNS Ping Utility for Solaris: Version 11.2.0.1.0 - Production on 30-JAN-2014 20:43:05

Copyright (c) 1997, 2009, Oracle.  All rights reserved.

Used parameter files:
/opt/oracle/oracle/product/11g/client/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.32.64.225)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = pindb)))
OK (10 msec)
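
A direct connection through the new alias is another useful check (the credentials below are the cloned PIN schema created earlier):

sqlplus pindev3/pindev3@PINDB11G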

Point DM_ORACLE to the appropriate shared library

  • Link to dm_oracle10g.so if the target database is Oracle 10g
  • Link to dm_oracle11g.so if the target database is Oracle 11g

Note

  • If you are cloning from a 32-bit environment into a 64-bit environment, note that the Oracle 11g 32-bit client libraries are not installed by default when Oracle Database is installed on a 64-bit machine. The Oracle 32-bit libraries will therefore need to be installed into the 64-bit environment.
vi $PIN_HOME/sys/dm_oracle/pin.conf

# - dm dm_sm_obj ${PIN_HOME}/sys/dm_oracle/dm_oracle10g${LIBRARYEXTENSION}
- dm dm_sm_obj ${PIN_HOME}/sys/dm_oracle/dm_oracle11g${LIBRARYEXTENSION}
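
After changing the shared-library entry, restart the Oracle DM so the new pin.conf is picked up (a sketch, assuming the standard pin_ctl utility is configured):

pin_ctl stop dm_oracle
pin_ctl start dm_oracle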

Extras

  • The following script will automate the import sequence described above, and report preliminary results and Oracle errors:
vi import_brm_schemas.sh
Example 'import_brm_schemas.sh' file:
#!/bin/bash

echo "START========================================================================================================"
rm /home/oracle/tmp/*.log

echo "--------------------------------------------------------------------------------------------------- PINDB"
sqlplus "system/oracle as sysdba" @drop_pindb.sql
sqlplus "system/oracle as sysdba" @create_pindb.sql
impdp PARFILE=options_imp_pindb.par

echo "--------------------------------------------------------------------------------------------------- IFW"
sqlplus "system/oracle as sysdba" @drop_ifw.sql
sqlplus "system/oracle as sysdba" @create_ifw.sql
sqlplus "system/oracle as sysdba" @create_ifw_roles.sql
impdp PARFILE=options_imp_ifw.par

echo "--------------------------------------------------------------------------------------------------- JSA"
sqlplus "system/oracle as sysdba" @drop_jsa.sql
sqlplus "system/oracle as sysdba" @create_jsa.sql
sqlplus "system/oracle as sysdba" @create_jsa_roles.sql
impdp PARFILE=options_imp_jsa.par

echo "--------------------------------------------------------------------------------------------------- quick test"
echo 'select count(*) from account_t;' | sqlplus "pindev3/pindev3"
echo 'select count(*) from ifw_discountmodel;' | sqlplus "ifwdev3/ifwdev3"
echo 'select count(*) from jsa_user;' | sqlplus "jsadev3/jsadev3"

echo ">>>>>---------------------------------------------------------------------------------------------- /home/oracle/tmp/pin74_dev_import.log"
grep "ORA-" /home/oracle/tmp/pin74_dev_import.log
echo ""
echo ">>>>>---------------------------------------------------------------------------------------------- /home/oracle/tmp/ifw74_dev_import.log"
grep "ORA-" /home/oracle/tmp/ifw74_dev_import.log
echo ""
echo ">>>>>---------------------------------------------------------------------------------------------- /home/oracle/tmp/jsa74_dev_import.log"
grep "ORA-" /home/oracle/tmp/jsa74_dev_import.log
echo ""
echo ""
echo "END========================================================================================================"

chmod +x ./import_brm_schemas.sh
./import_brm_schemas.sh


This marks the end of our cloning blog series. If you are interested in an even faster, less manual way to clone a BRM environment, we have an automated “push-button” solution that might meet your needs. Contact info@ssglimited.com for more information!


Using Expression Variables for Row Over Row Processing


Jennifer Vilches - SSG Blog - Downloadable White Paper

Data processing in Informatica PowerCenter is row-based. But sometimes you just need to reference the data in the previous row. What to do?

Expression transformation variables are one very useful answer. In my white paper, Row Over Row Processing Using Expression Variables, I go over two scenarios that use the power of variables to access values from previously processed rows. The first describes how to maintain a running month-to-date total, while the second concentrates on capturing state change counts.

The key to the solution is the order of evaluation of the expression ports: inputs, then variables, then outputs. So the variables retain their previous values until they are evaluated in order for the current row. The other important factor is providing sorted input. See the white paper for the full explanation and two examples.
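
As a rough illustration only (the port names are hypothetical and not taken from the white paper), a month-to-date total can be kept in variable ports that still hold their previous-row values when referenced ahead of their own evaluation, with rows arriving sorted by date:

-- Hypothetical Expression transformation ports, listed in evaluation order
v_MTD_TOTAL  := IIF(TO_CHAR(in_TXN_DATE, 'YYYYMM') = v_PREV_MONTH,
                    v_MTD_TOTAL + in_AMOUNT,   -- same month: accumulate
                    in_AMOUNT)                 -- month changed: restart the total
v_PREV_MONTH := TO_CHAR(in_TXN_DATE, 'YYYYMM')
o_MTD_TOTAL  := v_MTD_TOTAL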

Note: To view the white paper, fill out the brief form, and a download link will be sent to your specified email address immediately.

To view other white papers, webinars and more, visit SSG’s resource page.

How to Bill One Account in BRM for Testing


Alex Bramford - SSG Blog - Oracle BRM and Informatica Data Management

The BRM billing utilities, run in default mode, are great for billing large numbers of accounts, but they may not be appropriate for situations such as testing, when all that’s required is to verify the billing for a small number of accounts.

The method presented here makes it possible to bill a small number of accounts at a time, requiring only the account poids and the billinfo poids of the accounts to be billed (in this example, a single account). It is not intended to test your entire billing process, but is useful for preparing test accounts or as a quick test of BRM billing.

Important things to note:

  • This method only applies to accounts ready to be billed and does not force a bill through “bill now.” If the accounts in question are not past their billing date of month, this method will not work.
  • This is why we set pin_virtual_time forward, but of course this is not an option for production accounts: remember that time should never be set forward in production!
Pre-billing check

The first step is to check the current pin_virtual_time to know what timing reference to expect. In this example, we are into the next year so expectations are set based on this date.

$pin_virtual_time
mode 1  1446552000  Tue Nov  3 12:00:00 2015

The second step is to create the testnap script to retrieve the pre-billing status:

  • We can use PCM_OP_BAL_GET_ACCT_BILLINFO, which returns contact information in addition to the billinfo for the account, or we can use PCM_OP_SEARCH, which returns only the billinfo data.
  • Create the following input file to do the search:
r << XXX 1
0 PIN_FLD_POID                      POID [0] 0.0.0.1 /search -1 0
0 PIN_FLD_FLAGS                      INT [0] 0
0 PIN_FLD_TEMPLATE                   STR [0] "select X from /billinfo 1, /account 2 where 2.F1 = V1 and 1.F3 = 2.F2 "
0 PIN_FLD_RESULTS                  ARRAY [0] allocated 0, used 0
0 PIN_FLD_ARGS                     ARRAY [1] allocated 1, used 1
1     PIN_FLD_ACCOUNT_NO             STR [0] "12310002301"
0 PIN_FLD_ARGS                     ARRAY [2] allocated 1, used 1
1     PIN_FLD_POID                  POID [0] 0.0.0.1 /account 317975 0
0 PIN_FLD_ARGS                     ARRAY [3] allocated 1, used 1
1     PIN_FLD_ACCOUNT_OBJ           POID [0] 0.0.0.1 /account 317975 0
XXX
xop  PCM_OP_SEARCH 0 1

The third step is to query the account to show the pre-billing status:

testnap billinfo-query-10001.testnap
xop: opcode 7, flags 0
# number of field entries allocated 20, used 2
0 PIN_FLD_POID           POID [0] 0.0.0.1 /search -1 0
0 PIN_FLD_RESULTS       ARRAY [0] allocated 51, used 51
1     PIN_FLD_POID           POID [0] 0.0.0.1 /billinfo 317719 36
1     PIN_FLD_CREATED_T    TSTAMP [0] (1417590365) Wed Dec 03 07:06:05 2014
1     PIN_FLD_MOD_T        TSTAMP [0] (1446552000) Tue Nov 03 12:00:00 2015
1     PIN_FLD_READ_ACCESS     STR [0] "L"
1     PIN_FLD_WRITE_ACCESS    STR [0] "L"
1     PIN_FLD_ACCOUNT_OBJ    POID [0] 0.0.0.1 /account 317975 0
1     PIN_FLD_ACCT_SUPPRESSED    INT [0] 0
1     PIN_FLD_ACTG_CYCLE_DOM    INT [0] 3
1     PIN_FLD_ACTG_FUTURE_T TSTAMP [0] (1451779200) Sun Jan 03 00:00:00 2016
1     PIN_FLD_ACTG_LAST_T  TSTAMP [0] (1446508800) Tue Nov 03 00:00:00 2015
1     PIN_FLD_ACTG_NEXT_T  TSTAMP [0] (1449100800) Thu Dec 03 00:00:00 2015
1     PIN_FLD_ACTG_TYPE      ENUM [0] 2
1     PIN_FLD_ACTUAL_LAST_BILL_OBJ   POID [0] 0.0.0.1 /bill 470242 0
1     PIN_FLD_ACTUAL_LAST_BILL_T TSTAMP [0] (0) <null>
1     PIN_FLD_AR_BILLINFO_OBJ   POID [0] 0.0.0.1 /billinfo 317719 2
1     PIN_FLD_ASSOC_BUS_PROFILE_OBJ_LIST    STR [0] ""
1     PIN_FLD_BAL_GRP_OBJ    POID [0] 0.0.0.1 /balance_group 318231 0
1     PIN_FLD_BILLINFO_ID     STR [0] "Bill Unit(1)"
1     PIN_FLD_BILLING_SEGMENT   ENUM [0] 0
1     PIN_FLD_BILLING_STATE   ENUM [0] 0
1     PIN_FLD_BILLING_STATUS   ENUM [0] 0
1     PIN_FLD_BILLING_STATUS_FLAGS    INT [0] 0
1     PIN_FLD_BILL_ACTGCYCLES_LEFT    INT [0] 1
1     PIN_FLD_BILL_OBJ       POID [0] 0.0.0.1 /bill 469776 0
1     PIN_FLD_BILL_WHEN       INT [0] 1
1     PIN_FLD_BUSINESS_PROFILE_OBJ   POID [0] 0.0.0.0  0 0
1     PIN_FLD_COLLECTION_DATE TSTAMP [0] (1446508800) Tue Nov 03 00:00:00 2015
1     PIN_FLD_CURRENCY        INT [0] 124
1     PIN_FLD_CURRENCY_SECONDARY    INT [0] 0
1     PIN_FLD_EFFECTIVE_T  TSTAMP [0] (1417590365) Wed Dec 03 07:06:05 2014
1     PIN_FLD_EVENT_POID_LIST    STR [0] ""
1     PIN_FLD_EXEMPT_FROM_COLLECTIONS    INT [0] 0
1     PIN_FLD_FUTURE_BILL_T TSTAMP [0] (1451779200) Sun Jan 03 00:00:00 2016
1     PIN_FLD_LAST_BILL_OBJ   POID [0] 0.0.0.1 /bill 470242 0
1     PIN_FLD_LAST_BILL_T  TSTAMP [0] (1446508800) Tue Nov 03 00:00:00 2015
1     PIN_FLD_NEXT_BILL_OBJ   POID [0] 0.0.0.1 /bill 468912 0
1     PIN_FLD_NEXT_BILL_T  TSTAMP [0] (1449100800) Thu Dec 03 00:00:00 2015
1     PIN_FLD_NUM_SUPPRESSED_CYCLES    INT [0] 0
1     PIN_FLD_OBJECT_CACHE_TYPE   ENUM [0] 0
1     PIN_FLD_PARENT_BILLINFO_OBJ   POID [0] 0.0.0.0  0 0
1     PIN_FLD_PARENT_FLAGS    INT [0] 0
1     PIN_FLD_PAYINFO_OBJ    POID [0] 0.0.0.1 /payinfo/invoice 318999 0
1     PIN_FLD_PAYMENT_EVENT_OBJ   POID [0] 0.0.0.1 /event/billing/payment -1 0
1     PIN_FLD_PAY_TYPE       ENUM [0] 10001
1     PIN_FLD_PENDING_RECV DECIMAL [0] 2921.82
1     PIN_FLD_SCENARIO_OBJ   POID [0] 0.0.0.0  0 0
1     PIN_FLD_SPONSOREE_FLAGS    INT [0] 0
1     PIN_FLD_SPONSOR_FLAGS    INT [0] 0
1     PIN_FLD_STATUS         ENUM [0] 10100
1     PIN_FLD_STATUS_FLAGS    INT [0] 8
1     PIN_FLD_SUPPRESSION_CYCLES_LEFT    INT [0] 0
Advance pin_virtual_time one month

In order to trigger billing, we advance pin_virtual_time past the BDOM of the account (stored in PIN_FLD_ACTG_NEXT_T in the billinfo query above):

pin_virtual_time -m 1 120312002015
filename /export/home/pin74ba2/opt/portal/lib/pin_virtual_time_file, mode 1, time: Thu Dec  3 12:00:00
Create an input file to bill selected accounts

In the following step, billing will be run for selected test accounts. An optional parameter to pin_bill_accts specifies the input file to provide the /account and /billinfo identifiers of the accounts to be billed. Any number of accounts can be added to the file, which is in XML format.

  • The <Account> tag contains the poid of the /account to be billed.
  • The <Billinfo> tag contains the poid of the /billinfo to be billed.
vi bill-test.xml
<?xml version="1.0" encoding="UTF-8"?>
<BusinessConfiguration
        xmlns="http://www.portal.com/schemas/BusinessConfig"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.portal.com/schemas/BusinessConfig BusinessConfiguration.xsd">
        <BillRunConfiguration>
                <!-- List of Billinfo to be billed -->
                <BillingList>
                        <Account>317975</Account>
                        <Billinfo>317719</Billinfo>
                </BillingList>
        </BillRunConfiguration>
</BusinessConfiguration>

Now we are ready to run billing for our sample account. Adding the -verbose switch shows the output and extended errors encountered during the billing process.

pin_bill_accts -file bill-test.xml -verbose
Rows(elements) read from file:3 (1)
Fetched (1) units in Total.
Thread (2) begins ...
Thread (5) begins ...
Thread (6) begins ...
Thread (4) begins ...
Thread (3) begins ...
Thread (2) exits ...
Thread (6) exits ...
Thread (4) exits ...
Thread (3) exits ...
Thread (5) exits ...
Total number of records processed = 1.
Number of data errors encountered = 0.
Total number of errors encountered = 0.
Post-billing check

Finally, we query the account again to confirm the results of the billing and the post-billing status, or to identify errors that may have occurred.

In particular, the values of the following fields will have changed, indicating that billing is successful:

  • PIN_FLD_ACTG_LAST_T (indicating the end date of the previous billing cycle) should have been increased by one month.
  • PIN_FLD_ACTG_NEXT_T (indicating the start date for the next bill cycle) should have been increased by one month.
  • An increase in PIN_FLD_PENDING_RECV reflects the balance due for charges on products with non-zero balance impacts.
  • The PIN_FLD_BILL_OBJ holds the poid of the bill that was generated, which can be examined with a call to the PCM_OP_READ_OBJ opcode.
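
For example, a minimal testnap read of the generated bill (the /bill poid here is the one shown in PIN_FLD_BILL_OBJ of the post-billing output below):

r << XXX 1
0 PIN_FLD_POID           POID [0] 0.0.0.1 /bill 468912 0
XXX
robj 1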

 

testnap billinfo-query-10001.testnap

xop: opcode 7, flags 0
# number of field entries allocated 20, used 2
0 PIN_FLD_POID           POID [0] 0.0.0.1 /search -1 0
0 PIN_FLD_RESULTS       ARRAY [0] allocated 51, used 51
1     PIN_FLD_POID           POID [0] 0.0.0.1 /billinfo 317719 39
1     PIN_FLD_CREATED_T    TSTAMP [0] (1417590365) Wed Dec 03 07:06:05 2014
1     PIN_FLD_MOD_T        TSTAMP [0] (1449144000) Thu Dec 03 12:00:00 2015
1     PIN_FLD_READ_ACCESS     STR [0] "L"
1     PIN_FLD_WRITE_ACCESS    STR [0] "L"
1     PIN_FLD_ACCOUNT_OBJ    POID [0] 0.0.0.1 /account 317975 0
1     PIN_FLD_ACCT_SUPPRESSED    INT [0] 0
1     PIN_FLD_ACTG_CYCLE_DOM    INT [0] 3
1     PIN_FLD_ACTG_FUTURE_T TSTAMP [0] (1454457600) Wed Feb 03 00:00:00 2016
1     PIN_FLD_ACTG_LAST_T  TSTAMP [0] (1449100800) Thu Dec 03 00:00:00 2015
1     PIN_FLD_ACTG_NEXT_T  TSTAMP [0] (1451779200) Sun Jan 03 00:00:00 2016
1     PIN_FLD_ACTG_TYPE      ENUM [0] 2
1     PIN_FLD_ACTUAL_LAST_BILL_OBJ   POID [0] 0.0.0.1 /bill 469776 0
1     PIN_FLD_ACTUAL_LAST_BILL_T TSTAMP [0] (0) <null>
1     PIN_FLD_AR_BILLINFO_OBJ   POID [0] 0.0.0.1 /billinfo 317719 2
1     PIN_FLD_ASSOC_BUS_PROFILE_OBJ_LIST    STR [0] ""
1     PIN_FLD_BAL_GRP_OBJ    POID [0] 0.0.0.1 /balance_group 318231 0
1     PIN_FLD_BILLINFO_ID     STR [0] "Bill Unit(1)"
1     PIN_FLD_BILLING_SEGMENT   ENUM [0] 0
1     PIN_FLD_BILLING_STATE   ENUM [0] 0
1     PIN_FLD_BILLING_STATUS   ENUM [0] 0
1     PIN_FLD_BILLING_STATUS_FLAGS    INT [0] 0
1     PIN_FLD_BILL_ACTGCYCLES_LEFT    INT [0] 1
1     PIN_FLD_BILL_OBJ       POID [0] 0.0.0.1 /bill 468912 0
1     PIN_FLD_BILL_WHEN       INT [0] 1
1     PIN_FLD_BUSINESS_PROFILE_OBJ   POID [0] 0.0.0.0  0 0
1     PIN_FLD_COLLECTION_DATE TSTAMP [0] (1449100800) Thu Dec 03 00:00:00 2015
1     PIN_FLD_CURRENCY        INT [0] 124
1     PIN_FLD_CURRENCY_SECONDARY    INT [0] 0
1     PIN_FLD_EFFECTIVE_T  TSTAMP [0] (1417590365) Wed Dec 03 07:06:05 2014
1     PIN_FLD_EVENT_POID_LIST    STR [0] ""
1     PIN_FLD_EXEMPT_FROM_COLLECTIONS    INT [0] 0
1     PIN_FLD_FUTURE_BILL_T TSTAMP [0] (1454457600) Wed Feb 03 00:00:00 2016
1     PIN_FLD_LAST_BILL_OBJ   POID [0] 0.0.0.1 /bill 469776 0
1     PIN_FLD_LAST_BILL_T  TSTAMP [0] (1449100800) Thu Dec 03 00:00:00 2015
1     PIN_FLD_NEXT_BILL_OBJ   POID [0] 0.0.0.1 /bill 464723 0
1     PIN_FLD_NEXT_BILL_T  TSTAMP [0] (1451779200) Sun Jan 03 00:00:00 2016
1     PIN_FLD_NUM_SUPPRESSED_CYCLES    INT [0] 0
1     PIN_FLD_OBJECT_CACHE_TYPE   ENUM [0] 0
1     PIN_FLD_PARENT_BILLINFO_OBJ   POID [0] 0.0.0.0  0 0
1     PIN_FLD_PARENT_FLAGS    INT [0] 0
1     PIN_FLD_PAYINFO_OBJ    POID [0] 0.0.0.1 /payinfo/invoice 318999 0
1     PIN_FLD_PAYMENT_EVENT_OBJ   POID [0] 0.0.0.1 /event/billing/payment -1 0
1     PIN_FLD_PAY_TYPE       ENUM [0] 10001
1     PIN_FLD_PENDING_RECV DECIMAL [0] 3187.44
1     PIN_FLD_SCENARIO_OBJ   POID [0] 0.0.0.0  0 0
1     PIN_FLD_SPONSOREE_FLAGS    INT [0] 0
1     PIN_FLD_SPONSOR_FLAGS    INT [0] 0
1     PIN_FLD_STATUS         ENUM [0] 10100
1     PIN_FLD_STATUS_FLAGS    INT [0] 8
1     PIN_FLD_SUPPRESSION_CYCLES_LEFT    INT [0] 0

Testnap is Your Friend


Jessica Boepple - SSG Oracle BRM

When it comes to testing in BRM, you’ve got a friend in testnap. Testnap is a versatile command-line utility that you can use to test your BRM applications and execute opcodes. With testnap, you can perform the following tasks:

  • Create input flists
  • Test the validity of input flists
  • Save input and output flists
  • Execute opcodes
  • View return flists
  • Create, view, modify and delete objects and their fields
  • Open, commit and abort transactions
  • And more!

Benefits of using testnap

Testnap is an incredibly powerful tool. As seen above, it has many possible excellent uses that make testing easier and more robust. Some of testnap’s most helpful attributes are listed below.

  • With its informative error messages, testnap tells you where the problem occurs, allowing you to easily pinpoint and correct errors.
  • Testnap also logs more detailed explanations to a pinlog about why an action might be invalid or why an error is occurring.
  • Testnap allows you to execute a wide variety of tests all in one place, making it a wonderfully convenient tool!

Where to use testnap

Testnap interacts with the server by establishing a PCM connection with the Connection Manager (CM) and executing PCM opcodes using that connection. To connect to your BRM system, the testnap utility requires a pin.conf configuration file. You may choose to use the CM’s pin.conf file, running testnap in BRM_Home/sys/cm, or you may choose to run testnap from another directory that contains a suitable pin.conf file, such as BRM_Home/sys/test.

Creating a testnap script

The basic format of a testnap script looks like this:

[1] r << XXX 1
[2] 0 PIN_FLD_POID POID [0] 0.0.0.1 /account 56375 13
[3] XXX
[4] d 1
[5] robj 1

The line numbers in [brackets] above are just shown for clarity and are NOT part of the testnap script.

  1. Line 1 says to read everything that follows into buffer #1, until ‘XXX’ is reached.
  2. Line 2 contains the flist (in this case, it is a simple one-line flist, but typically this will contain a larger multi-line flist).
  3. The ‘XXX’ in Line 3 indicates the end of the text to put into the buffer.
  4. Line 4 just says to display the flist we have loaded into buffer 1.
  5. Line 5 says to execute the “READ_OBJ” opcode on buffer 1. This line could be replaced with a call to some opcode. For example:
    xop PCM_OP_ACT_USAGE 0 1
    This tells the system to execute opcode PCM_OP_ACT_USAGE with flag=0 on buffer 1.

4 Real-World Applications of testnap

Specific examples of scenarios in which you could use testnap are listed below. This is by no means a complete list, but it should help give you an idea of just how flexible and worthwhile testnap can be!

  1. Verifying the database connection and that all BRM components successfully started
    It’s always a good practice to check and make sure that you can run testnap after connecting to the database and bringing up all of the necessary components of BRM. If you accidentally missed starting a BRM component or made a mistake when editing a configuration file, testnap will not start successfully. If testnap starts, that’s a good indication that your system is healthy.
  2. Verifying validity of an input flist for an opcode you want to build and use in a BRM application
    Sometimes the required fields listed for input flists in the Oracle documentation don’t always line up with what BRM actually requires in the flist. This can result in unexpected errors and hours of frustration. By creating an flist in testnap and executing the opcode you want to test, you can verify that the opcode will work properly with your input.
  3. Executing a read object
    Reading an object in testnap allows you to quickly verify that any modifications, charges, or updated information associated with the object appear as expected. This is very versatile and can be used for testing account updates, billing or anything you need to check on when testing.
  4. Using an input file to perform simple actions
    Testnap allows you to create a file to store an input flist that can be passed to an opcode. For example, you can use an input file to load usage for test accounts. You can quickly verify that usage charges are appearing as expected by executing a read object on the test account and checking the output, without even needing to leave testnap! Using input files to store longer input flists is a great way to test opcodes with more complex input requirements.
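
As a concrete sketch, the one-line read shown earlier could be saved to a file (the filename is arbitrary) and replayed non-interactively, which is the same pattern used for longer opcode inputs:

vi read_account.testnap
r << XXX 1
0 PIN_FLD_POID POID [0] 0.0.0.1 /account 56375 13
XXX
robj 1

testnap read_account.testnap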

As you can see, testnap is a powerful tool that can assist with validating BRM connectivity, development prototyping, testing new functionality, and more, all without writing any code! Utilize testnap to make your life easier and enhance the robustness of your tests and custom creations in BRM.

How to Be a Time Traveler Using pin_virtual_time


Jessica Boepple - SSG Oracle BRM

Have you ever wanted to travel through time? No need for a flux capacitor! Oracle BRM provides that exciting opportunity with the use of pin_virtual_time, which allows you to adjust the time of your BRM instance without changing your system time. Time traveling is not always appropriate or necessary, so let’s briefly go over some examples of when to use pin_virtual_time and how it should be used.

When to use pin_virtual_time

Pin_virtual_time should only ever be used in a testing environment. Common situations for using pin_virtual_time include but are not limited to:

  • Testing if accounts are billed correctly by advancing the date to the next billing cycle, then running billing and generating an invoice. (This example is covered below in more detail.)
  • Testing if a time-based discount takes effect correctly. For example, if you have a deal that includes 90 days of free email service, advance the date more than 90 days to see if the account starts being charged for the service.
  • Testing if folds are working correctly. For example, if a product includes 10 free hours of Internet service per month, advance the date by a month or more to ensure that unused free hours do not carry over.

Proper use of pin_virtual_time

  • Because the pin_virtual_time utility works only on a single machine, you should set up a test BRM system on just one server.
  • Time should always be adjusted forward and never into the past.
  • Only move time forward as far as necessary. For example, if you want to test a monthly billing charge, it is only necessary to move time forward by one month from the time of the account creation.
  • To test custom client applications that are connected to the CM, you can use PCM_OP_GET_PIN_VIRTUAL_TIME to get the virtual time that is set by pin_virtual_time.

Words of caution

  • Never use pin_virtual_time in a production environment.
  • Time should not be adjusted forward farther than necessary.
  • It’s possible to run pin_virtual_time on a system that is distributed across multiple computers, but you must carefully coordinate time changes on all the computers that are a part of the system. Therefore, it is more prudent to set up a test BRM system on a single computer or virtual machine.
  • You may be wondering: “Can I travel back in time?” You can, but traveling back in time is NOT recommended for the following reasons:
    1. Moving time backwards can cause severe data corruption.
    2. Unless you’re rebuilding BRM from scratch, no situation requires traveling back in time.
    3. Setting time back can result in events occurring out of order, which leads to unexpected complications with tests that are extremely difficult or impossible to resolve. Moving time backwards leads to data corruption and unexpected behavior in multiple areas of BRM, including pricing, which can taint your test environment.

Since the safest approach is to always know how your system behaves and why, you should always avoid moving pin_virtual_time into the past.

An example of how to use pin_virtual_time

In this example, we’ll time travel ahead one month and run billing to verify that all charges associated with a test plan appear on the test customer’s invoice. Pin_virtual_time can only be run from a directory with a valid pin.conf where the pointer is not commented out. Any application pin.conf file may contain the entry as well. To enable adjusting pin_virtual_time, uncomment the following line in the pin.conf file where pin_virtual_time is being run:

- - pin_virtual_time ${PIN_HOME}/lib/pin_virtual_time_file

The following out-of-the-box directories contain pin.conf files that can be updated to enable time travel:

  • CM:  $PIN_HOME/sys/cm/
  • CMMP:  $PIN_HOME/sys/cmmp/
  • TEST: $PIN_HOME/sys/test/
  • Billing Applications:  $PIN_HOME/apps/pin_billd/
  • Invoicing:  $PIN_HOME/apps/pin_inv/

Always stop and restart BRM after updating the pin.conf files to allow changes to take effect. Once the configuration files have been updated, follow the steps below.

  1. Set up a test scenario for the customer(s) and plan(s) to be verified with billing.
  2. In your testing environment, enter the pin_virtual_time command to see the current virtual time displayed. We will be moving time forward by one month from this time.
  3. To set pin_virtual_time to a specified time where the clock will continue to run, use mode 2 and set time forward one month from the current virtual time using this command:

pin_virtual_time -m 2 MMDDHHmmYYYY

a. You should generally always use mode 2 when setting time forward, but descriptions of all of the mode options are listed below for reference.

i. Mode 0 (normal mode) indicates that the system should use the operating system time with no adjustments.

ii. Mode 1 (freeze mode) indicates that the system should move forward to the entered time value, where time is frozen until pin_virtual_time is used again to change the time or mode. (Use this mode only when absolutely necessary, because BRM expects time to be moving.)

iii. Mode 2 (offset mode) sets the time forward to the entered time value and keeps the clock running forward.
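
Putting step 3 together with an example: starting from a current virtual time of Tue Nov 3 2015 (as in the single-account billing post earlier in this series), the following would move the clock one month ahead to Dec 3 2015 at noon and leave it running in offset mode:

pin_virtual_time -m 2 120312002015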

4.  Run the pin_bill_day command to bill all customers. (It’s really that easy.) If you would prefer to only bill your test customer, see Alex Bramford’s blog on How to Bill One Account in BRM for Testing.

5.  Inspect the test account using your favorite tool. Any charges associated with the test customer’s plan should now appear.

BRMmart: Making Report Generation Easy


Jessica Boepple - SSG Oracle BRM

Getting historical information out of Oracle BRM can be quite a chore. The complex nature of BRM’s object-oriented data store makes the process of data extraction and report generation challenging and inefficient. BRMmart provides a solution that makes report generation faster and easier while helping to prevent the performance issues that can occur when pulling data out of BRM for reports.

What Is BRMmart?

A data mart is a system built strictly for reporting purposes, designed for improved performance, ease of use for reporting, and minimal performance impact on the associated transactional system. BRMmart is a data mart that serves as the middle-man between Oracle BRM and end-user reporting and analysis tools. Information in BRMmart is organized as a star schema, which simplifies user understanding of the data, maximizes database query performance for decision support applications and requires less storage for large transactional databases.

Because BRMmart is separate from BRM’s transactional system, you can decrease the time it takes to generate reports, and the transactional system won’t suffer from an impact to performance. Writing reports directly from BRM’s transaction-based database schema can be difficult and can negatively impact system performance, but BRMmart offers an easy to use, efficient alternative.

BRMmart is broken down into three subject areas: Subscribers, Revenue and Usage.

  • Subscribers provides information about customers, including contact information and status changes associated with subscribers’ accounts.
  • Revenue includes all dollar-impacting events and general ledger information.
  • Usage consists of all usage records extracted and aggregated per business rules.

Each subject area has associated dimensions, which provide more detailed information including:

  • Time
  • Accounts
  • Products
  • Deals
  • Plans
  • Services

What Can BRMmart Do?

BRMmart can provide both analytic and operational reporting for the Subscriber, Revenue and Usage subject areas and the associated dimensions of Time, Accounts, Products, Deals, Plans and Services. Some examples of reports you can generate are listed below.

  • Number of New Accounts Created by Day
  • Cancel and Add Trends by Product
  • Monthly Revenue by GL Account
  • Credit Card Charges by Day by Card Type
  • Number of Usage Records loaded by Day/Week/Month
  • Average Speed by Service
  • And many more!
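
As a rough illustration of how a star schema makes this kind of report straightforward, a "new accounts created by day" query might look like the sketch below. The table and column names (fact_subscriber, dim_time, and so on) are hypothetical; the actual BRMmart schema depends on your implementation.

-- Hypothetical star-schema query: number of new accounts created by day
SELECT t.calendar_date,
       COUNT(*) AS new_accounts
FROM   fact_subscriber f
JOIN   dim_time        t ON t.time_key = f.account_created_time_key
GROUP  BY t.calendar_date
ORDER  BY t.calendar_date;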

Oracle BRM supports extending its base objects to allow customized object definitions. To support reporting and analysis using this same extensible paradigm, BRMmart is designed with core components based on BRM's generic out-of-the-box functionality, as well as extended components that accommodate customizations. This setup lets you gather reporting data on custom additions without making it a chore to access your unique data metrics.

Conclusion

Information in a transactional system such as Oracle BRM is organized to get orders efficiently into the system. It is not designed to facilitate information gathering and reporting. While BRM is a powerful and effective tool, businesses need the ability to quickly and efficiently generate reports that will help them manage their affairs. With BRMmart’s abstracted, normalized set of tables separate from the transactional system, the process of report generation becomes much more efficient and straightforward. Creating a separate reporting environment decreases the time required to get information out of the transactional system and minimizes performance impact on the transactional system, taking the stress out of getting the information you need.

For more information, contact us at info@ssglimited.com.

Creating Custom Fields in Oracle BRM


Jenny Streett SSG Blog

As developers, sometimes we need to represent data that does not fit into any of Oracle BRM’s out-of-the-box fields. The following steps detail how you can easily create your own custom fields and use them in your custom code.

Before you can add custom fields, you need to verify that the data dictionary is writeable:

  • Open the DM pin.conf file (PIN_HOME/sys/dm_oracle/pin.conf) in a text editor.
  • Set the dd_write_enable_fields entry to 1:
    dm dd_write_enable_fields 1

Creating Custom Fields in Developer Center

  1. Open the Storable Class Editor in Developer Center and select File -> New -> Field.
  2. Give your field a descriptive name and select the data type from the drop-down menu.
    Note: Starting your field name with something other than PIN_FLD will make upgrading BRM easier down the line, because you can easily tell which fields you added.
  3. In the Description field, write a note about the purpose of your field.
  4. Developer Center will automatically generate a unique field id, but you can change it.
    Note: BRM reserves field IDs up to 10,000, so your custom field ID should be above that range.
[Screenshot: creating a custom field in Developer Center]

BRM does not provide an easy way to remove fields from the data dictionary, so check to see that everything is correct before clicking OK. If you do need to make changes after the fact, check out this article for an explanation of how to remove fields from the data dictionary: Using SQL to Manipulate BRM’s Data Dictionary.

Using Custom Fields in C Applications

Your custom fields are now in the BRM Data Dictionary, but you’ll need to take a few extra steps to get BRM to recognize them in your code:

  1. In Developer Center, select File -> Generate Custom Fields Source and select a directory for your new source files.
  2. Run the parse_custom_ops_fields Perl script (PIN_HOME/bin/parse_custom_ops_fields.pl) on the header file generated by Developer Center:
    parse_custom_ops_fields -L language -I input -O output
    For C and C++ applications, the language should be pcmc. The input is the name of your header file, and the output is the name and location of the extension file the script will generate (a sample invocation appears after this list).
  3. Add an entry in the pin.conf files for the CM and all applications that will need access to your custom fields, including testnap:
    - - ops_fields_extension_file my_custom_fields
    where my_custom_fields is the name and location of the output file from the parse_custom_ops_fields script.
    Note: BRM will only read the first ops_fields_extension_file entry it finds, so all custom fields, storable classes, and opcodes need to be included in the same file.
  4. Include the header file in any code that uses your custom fields.
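
For example, assuming Developer Center generated a header named custom_flds.h (the file names and paths here are placeholders), the parse step might look like this:

cd $PIN_HOME/bin
perl parse_custom_ops_fields.pl -L pcmc -I /home/pin/custom/custom_flds.h -O /home/pin/custom/custom_flds_extension

The resulting custom_flds_extension file is what the ops_fields_extension_file entry in step 3 should point to.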

 Using Custom Fields in Java Applications

To use your custom fields in Java applications:

  1. If you haven’t already done so, follow Step 1 above to generate custom fields source in Java. This will create .java files for all of your custom fields and a file called InfranetPropertiesAdditions.properties.
  2. Copy the contents of the .properties file generated by Developer Center into the Infranet.properties file for your application.
  3. Compile the .java source files generated by Developer Center.
  4. Jar the compiled classes. In Eclipse, right-click on the package in the Project Explorer and select Export -> Java -> JAR file. From the command line, type:
    jar cvf filename.jar *.class
  5. Add the JAR file to the classpath in your Java project.
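
Putting steps 3 through 5 together from the command line might look roughly like the sketch below. The directory, jar name, and location of the BRM Java PCM library (pcm.jar) are assumptions that vary by installation:

cd /home/pin/custom/fields               # directory containing the generated .java files
javac -cp $PIN_HOME/jars/pcm.jar *.java  # compile against the BRM Java PCM library (path is illustrative)
jar cvf custom_fields.jar *.class        # package the compiled field classes
# then add custom_fields.jar (alongside pcm.jar) to your application's classpath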

Cloning BRM and Pipeline Environments for Development and Testing


Alex Bramford - SSG Blog - Oracle BRM and Informatica Data Management

Many times when working on multiple projects for a client, it is necessary to be able to reproduce a BRM/Pipeline environment containing customizations and test data. Multiple environments can be used to test different branches of code, various releases or different levels of staging. This blog series uses Linux command line tools and Oracle Data Pump to clone a BRM/Pipeline schema and BRM executable code base, which can subsequently be used for development or testing.

Caveat

  • The steps presented here require system level access on the database server, and sufficient privileges granted to the database users. Therefore it may not be possible to implement these in larger development environments where access to database servers is typically severely limited.
  • These steps are ideal for quickly adding additional “sandbox” type development environments or testing environments. However, a more rigorous methodology should always be followed when deploying into production environments.

Prerequisites

  • Before embarking on the cloning exercise, ensure target devices have sufficient disk space for both the $PIN_HOME tree and the log files.
    • BRM docs recommend at least 300 MB disk space for the BRM Server and at least 2 GB disk space per 10,000 customers per year for the database.
    • Sizing will depend on current number of customers and expected number of additional customers. Keep in mind that some test environments will need more space than others.
  • Given that good development environments often have log levels set to debug for the Connection Managers (CMs) and Data Managers (DMs), it may be a good idea to locate the log files under $PIN_LOG_DIR on a separate storage device on the target machine.
  • Depending on the size of your source database, you may wish to check that the source machine has sufficient space to stage the dump files to which the schemas will be exported.

Source Machine

  • The df command can be used to report disk space usage and confirm there is sufficient storage available on the source and target file systems for the database export and import. (The -h switch shows the results in human-readable format, i.e. in GB instead of blocks.)
$df -h /export/home/oracle/tmp

Filesystem             size   used  avail capacity  Mounted on

 /dev/dsk/storage       23G    16G   6.6G    71%    /export/home


Target machine

$df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_host-LogVol01
                      2.0G  522M  1.4G  28% /
tmpfs                 2.0G  1.2G  782M  61% /dev/shm
/dev/mapper/asr_raid0p1
                      485M   94M  366M  21% /boot
/dev/mapper/vg_host-LogVol06
                      197G   87G  100G  47% /data1
/dev/mapper/vg_host-LogVol03
                       20G  6.4G   12G  35% /home
/dev/mapper/vg_host-LogVol02
                       20G  4.7G   14G  26% /opt
/dev/mapper/vg_host-LogVol05
                      5.8G  1.8G  3.8G  32% /usr
/dev/mapper/vg_host-LogVol04
                      9.7G  531M  8.7G   6% /var

Coming up:

While the end goal may be accomplished by cloning the code and database together, at times only one of the components may be required – therefore we will present these operations in two steps as follows:

1. Cloning the BRM Code

There are times when a new BRM code base is needed but this may not always have to be done in conjunction with a database clone.

  • For example, if a new code branch is required and the code does not involve any database changes, then a DB refresh is not necessary.
  • Another use is if the code base is somehow corrupted (for instance, with bad code) and a new environment is needed quickly.

2. Cloning the BRM Database

Likewise, there is a case for only cloning the BRM Database when a new code base clone is not needed.
  • The database is corrupted. This can happen with a bad deployment, but also with a pin_virtual_time mishap.
  • pin_virtual_time must never be moved backwards. Therefore, if reverting to a known (prior) date is required, a clean database should be restored.

Enabling Level 2 Payments in BRM


Alex Bramford - SSG Blog - Oracle BRM and Informatica Data Management

In the first blog of the series, we discussed the financial benefits of providing Level 2 data in requests sent to the credit card processor. In the second blog post, we went on to show how it is possible to configure the credit card payment type to exchange this information with the payment processor. However, changes made to the configuration of the "Out of the Box" (OOB) core credit card payment type may be affected by future releases of the BRM software. In the third post of this series, we isolate these changes from the standard BRM implementation by building a new payment type for sending Level 2 credit card data. In this example, we are adding an order ID and purchase description as sample Level 2 data elements.

Limitations of the default BRM credit card payment type

  • The pin_collect utility will not send and receive Level 2 data when making credit card payment requests.
  • Specifically: PCM_OP_PYMT_CHARGE_CC drops the PIN_FLD_VENDOR_RESULTS + PIN_FLD_TRANSACTION_ID + PIN_FLD_AUTH_DATE when it returns.
  • PCM_OP_PYMT_CHARGE_CC keeps only PIN_FLD_RESULT.
  • PCM_OP_PYMT_CHARGE_CC cannot be used to return custom fields from the DM.

Benefits of creating a new payment type to support level 2 data

  • The BRM OOB credit card payment type remains unchanged.
  • The new payment type can be configured to support level 2 data and enhanced vendor response code support.
  • Ability to exchange and store Level 2 data when running pin_collect for BRM initiated credit-card or debit-card payments.

Approach

  1. Create a new payment type following the standard BRM method of creating a /config/payment object (payment ID used is 10106)
  2. Configure /config/ach to route Level 2 payment requests to the payment processor.
  3. Write a custom opcode that is called when doing collection activities for the new payment type (new opcode numbers 14000, 14001, 14002)
  4. Write a custom data manager (DM) to accept the credit card information (including the level 2 data) and communicate with the payment processor
  5. Configure /config/payment to call the custom opcode when processing the CHARGE operation for credit card payments.

Development Artifacts

  • Custom types: a number of new BRM object types must be defined to send the purchase description and the order ID in the payment request Level 2 data:

o   /payinfo/cclevel2

o   /event/billing/validate/cclevel2

o   /event/billing/charge/cclevel2

o   /event/billing/payment/cclevel2

o   /event/billing/refund/cclevel2

  • Configuration:

o   The /config/payment object must be configured to call custom opcodes for the validation, charge, and recover operations.

o   The /config/payment object must be configured to generate the new validation-, charge-, payment- and refund- events that, along with the new opcodes, constitute the new payment type.

o   /config/ach must be configured to route payments to the custom DM (e.g. 0.0.10.4 /payment/cclevel2).

  • Development:

o   Opcodes must be written to implement the validation, charge, and recover operations.

o   Data Manager must be written to accept and return Level 2 data to the charge operation when connecting to the payment processor.

Custom types

  • The following snippets show the salient fields of the new billing events that will be defined:
  • /event/billing/charge/cclevel2
0 PIN_FLD_CC_INFO                  ARRAY [0] allocated 8, used 8
1     PIN_FLD_DESC                   STR [0] "Monthly Fees"
1     PIN_FLD_DEBIT_EXP              STR [0] "XXXX"
1     PIN_FLD_DEBIT_NUM              STR [0] "XXXX"
1     PIN_FLD_ORDER_ID                    STR [0] "12345"

  • /event/billing/payment/cclevel2
0 PIN_FLD_CC_INFO                  ARRAY [0] allocated 8, used 8
1     PIN_FLD_DESC                   STR [0] "Monthly Fees"
1     PIN_FLD_DEBIT_EXP              STR [0] "XXXX"
1     PIN_FLD_DEBIT_NUM              STR [0] "XXXX"
1     PIN_FLD_ORDER_ID                    STR [0] "12345"

 

  • /event/billing/refund/cclevel2
0 PIN_FLD_CC_INFO                  ARRAY [0] allocated 8, used 8
1     PIN_FLD_DESC                   STR [0] "Monthly Fees"
1     PIN_FLD_DEBIT_EXP              STR [0] "XXXX"
1     PIN_FLD_DEBIT_NUM              STR [0] "XXXX"
1     PIN_FLD_ORDER_ID                    STR [0] "12345"

  • /event/billing/validate/cclevel2
0 PIN_FLD_CC_INFO                  ARRAY [0] allocated 8, used 8
1     PIN_FLD_ADDRESS                STR [0] "TestCC-1"
1     PIN_FLD_CITY                   STR [0] "Raleigh"
1     PIN_FLD_COUNTRY                STR [0] "USA"
1     PIN_FLD_DEBIT_EXP              STR [0] "XXXX"
1     PIN_FLD_DEBIT_NUM              STR [0] "XXXX"
1     PIN_FLD_NAME                   STR [0] "TestCC-1"
1     PIN_FLD_STATE                  STR [0] "NC"
1     PIN_FLD_ZIP                    STR [0] "75288"
  • /payinfo/cclevel2
0 PIN_FLD_CC_INFO                  ARRAY [0] allocated 8, used 8
1     PIN_FLD_ADDRESS                STR [0] "TestCC-1"
1     PIN_FLD_CITY                   STR [0] "Raleigh"
1     PIN_FLD_COUNTRY                STR [0] "USA"
1     PIN_FLD_DEBIT_EXP              STR [0] "XXXX"
1     PIN_FLD_DEBIT_NUM              STR [0] "XXXX"
1     PIN_FLD_NAME                   STR [0] "TestCC-1"
1     PIN_FLD_STATE                  STR [0] "NC"
1     PIN_FLD_ZIP                    STR [0] "75288"

  • /event/audit/customer/payinfo/cclevel2

Configuration

  • The following snippets show values for the payment type configuration object and the ACH configuration object respectively:
/config/payment
0 PIN_FLD_POID           POID [0] 0.0.0.1 /config/payment 200 6
0 PIN_FLD_PAY_TYPES                ARRAY [10106] allocated 4, used 4
1     PIN_FLD_PAYINFO_TYPE           STR [0] "/payinfo/cclevel2"
1     PIN_FLD_PAYMENT_EVENT_TYPE     STR [0] "/event/billing/payment/cclevel2"
1     PIN_FLD_REFUND_EVENT_TYPE      STR [0] "/event/billing/refund/cclevel2"
1     PIN_FLD_OPCODES              ARRAY [0] allocated 4, used 4
2         PIN_FLD_EVENT_TYPE         STR [0] "/event/billing/validate/cclevel2"
2         PIN_FLD_FLAGS              INT [0] 0
2         PIN_FLD_NAME               STR [0] "CUSTOM_OP_BILL_VALIDATE_CC"
2         PIN_FLD_OPCODE             INT [0] 14000
1     PIN_FLD_OPCODES              ARRAY [1] allocated 4, used 4
2         PIN_FLD_EVENT_TYPE         STR [0] "/event/billing/charge/cclevel2"
2         PIN_FLD_FLAGS              INT [0] 0
2         PIN_FLD_NAME               STR [0] "CUSTOM_OP_PYMT_CHARGE"
2         PIN_FLD_OPCODE             INT [0] 14001
1     PIN_FLD_OPCODES              ARRAY [2] allocated 4, used 4
2         PIN_FLD_EVENT_TYPE         STR [0] ""
2         PIN_FLD_FLAGS              INT [0] 0
2         PIN_FLD_NAME               STR [0] "CUSTOM_OP_BILL_RECOVER_CC"
2         PIN_FLD_OPCODE             INT [0] 14002

/config/ach
0 PIN_FLD_POID                      POID [0] 0.0.0.1 /config/ach 1002305 0
.
.
0 PIN_FLD_ACH_INFO                 ARRAY [0] allocated 4, used 4
1     PIN_FLD_CHANNEL_ID             INT [0] 20
1     PIN_FLD_MERCHANT               STR [0] "Level2Processor"
1     PIN_FLD_NAME                   STR [0] "Level2Processor"
1     PIN_FLD_POID_VAL              POID [0] 0.0.10.4 /payment/cclevel2 -1 0

Opcodes

  • The pin_collect utility calls PCM_OP_PYMT_CHARGE, which in turn calls the custom charge opcode specified on the payment type configuration object.
  • The new opcode, CUSTOM_OP_PYMT_CHARGE, must be written to process the payment request by calling the custom DM with the Level 2 data.

Data Manager
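
  • The custom DM (dm_payments_custom) must accept the charge request from the custom opcode, pass the Level 2 data to the payment processor, and return the processor's results on the output flist (see the call sequence below).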

Additional Activities  (not within scope of this document)
  • Customer FM policy opcodes should be updated to accommodate the new payinfo type.
  • Payment FM policy opcodes should be updated to accommodate the new payment and refund event types.

 

Call sequence – pin_collect

The following shows the pin_collect call stack, highlighting, in order of invocation, the key points in the application, configurations, opcodes, and function calls:

  1. Run the payment process for Level 2 credit card accounts against the payment processor connected to the new DM.
    • pin_collect -pay_type 10106 -vendor level2processor -active -verbose -report
  2. Determine the data manager to use
    • Resolved using pin_collect command line option “-vendor Level2Processor”: lookup on /config/ach determines the routing POID to the payment processor interface
0 PIN_FLD_POID                      POID [0] 0.0.0.1 /config/ach 1002305 0
.
.
0 PIN_FLD_ACH_INFO                 ARRAY [2] allocated 4, used 4
1     PIN_FLD_CHANNEL_ID             INT [0] 20
1     PIN_FLD_MERCHANT               STR [0] "level2processor"
1     PIN_FLD_NAME                   STR [0] "level2processor"
1     PIN_FLD_POID_VAL              POID [0] 0.0.10.4 /payment/level2processor -1 0
  3. Determine the charge opcode to use
    • Lookup on /config/payment determines the custom charge opcode (CUSTOM_OP_PYMT_CHARGE) to call for payment type 10106:
0 PIN_FLD_PAY_TYPES                ARRAY [10106] allocated 4, used 4
1     PIN_FLD_PAYINFO_TYPE           STR [0] "/payinfo/cclevel2"
1     PIN_FLD_PAYMENT_EVENT_TYPE     STR [0] "/event/billing/payment/cclevel2"
1     PIN_FLD_REFUND_EVENT_TYPE      STR [0] "/event/billing/refund/cclevel2"
1     PIN_FLD_OPCODES              ARRAY [0] allocated 4, used 4
2         PIN_FLD_EVENT_TYPE         STR [0] "/event/billing/validate/cclevel2"
2         PIN_FLD_FLAGS              INT [0] 0
2         PIN_FLD_NAME               STR [0] "CUSTOM_OP_PYMT_VALIDATE_CC"
2         PIN_FLD_OPCODE             INT [0] 14000
1     PIN_FLD_OPCODES              ARRAY [1] allocated 4, used 4
2         PIN_FLD_EVENT_TYPE         STR [0] "/event/billing/charge/cclevel2"
2         PIN_FLD_FLAGS              INT [0] 0
2         PIN_FLD_NAME               STR [0] "CUSTOM_OP_PYMT_CHARGE"
2         PIN_FLD_OPCODE             INT [0] 14001
1     PIN_FLD_OPCODES              ARRAY [2] allocated 4, used 4
2         PIN_FLD_EVENT_TYPE         STR [0] ""
2         PIN_FLD_FLAGS              INT [0] 0
2         PIN_FLD_NAME               STR [0] "CUSTOM_OP_PYMT_RECOVER_CC"
2         PIN_FLD_OPCODE             INT [0] 14002

  4. PCM_OP_PYMT_COLLECT
  5. CUSTOM_OP_PYMT_CHARGE
  6. fm_custom_pymt_charge_opcode.c
  7. PCM_OP_WRITE_FLDS (0.0.10.4) – write payment attributes with Level 2 data to the payment processor at BRM database 0.0.10.4
  8. dm_payments_custom.c

o   Enter the DM at the standard entry point, dm_if_process_op ()

dm_if_process_op(struct dm_sm_info *dsip,
                        u_int             pcm_op,
                        u_int             pcm_flags,
                        pin_flist_t       *in_flistp,
                        pin_flist_t       **out_flistpp,
                        pin_errbuf_t      *ebufp)
.
.
case PCM_OP_WRITE_FLDS:
      invoke_payment_processor(pcm_op, in_flistp, out_flistpp, ebufp);
      break;
  9. DM calls the payment processor

o   Populate the payment request with fields from the input flist.

o   Call the (external) payment processor API

o   Return the payment processor call results on the output flist.

  10. dm_payments_custom.c
  11. fm_custom_pymt_charge_opcode.c
  12. PCM_OP_PYMT_CHARGE
  13. pin_collect

Interaction between opcode and DM

  • The custom opcode and custom DM work together to exchange Level 2 payment data with the payment processor.
  • The custom opcode populates the payment request with Level 2 data to send to the DM, and returns to pin_collect the custom data returned by the DM.
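
As a rough illustration only (not the actual implementation), the custom opcode might attach the Level 2 fields to the request flist it sends to the DM along these lines, using the standard PCM flist macros; the variable names are placeholders:

/* Sketch: add Level 2 data to the PIN_FLD_CC_INFO element of the charge request */
PIN_FLIST_FLD_SET(cc_info_flistp, PIN_FLD_DESC,     (void *)"Monthly Fees", ebufp);
PIN_FLIST_FLD_SET(cc_info_flistp, PIN_FLD_ORDER_ID, (void *)"12345",        ebufp);

/* Send the request to the custom DM and read the processor results back */
PCM_OP(ctxp, PCM_OP_WRITE_FLDS, 0, in_flistp, &out_flistp, ebufp);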

DM search output flist

  • The DM should populate the flist returned to the calling opcode, CUSTOM_OP_PYMT_CHARGE, to include custom results as follows:
D Wed Mar 12 12:00:00 2023  pindev  cm:8959  cm_child.c(115):4829 1:pindev:pin_collect:8952:-143840352:113:1366511900:3
      opcode db: 655364(0.0.10.4), context db: 0(0.0.0.0), trans_state: 5, opcode: 7
  D Wed Mar 12 12:00:00 2023  pindev  cm:8959  pcpst.c(82):404 1:pindev:pin_collect:8952:-143840352:113:1366511900:3
      connect to host=pindev, port=13020 OK
  D Wed Mar 12 12:00:00 2023  pindev  cm:8959  fm_custom_pymt_charge_opcode.c:568 1:pindev:pin_collect:8952:-143840352:113:1366511900:3
      dm search output flist
# number of field entries allocated 20, used 4
0 PIN_FLD_POID           POID [0] 0.0.10.4 /_cc_db -1 0
0 PIN_FLD_BATCH_INFO    ARRAY [0] allocated 20, used 2
1     PIN_FLD_BATCH_ID        STR [0] "T1,98"
1     PIN_FLD_RESULT         ENUM [0] 0
0 PIN_FLD_RESULTS       ARRAY [0] allocated 20, used 2
1     PIN_FLD_RESULT         ENUM [0] 0
1     PIN_FLD_CHARGES       ARRAY [0] allocated 20, used 4
2         PIN_FLD_RESULT         ENUM [0] 0
2         PIN_FLD_TRANS_ID        STR [0] "T1,98,0"
2         PIN_FLD_INHERITED_INFO SUBSTRUCT [0] allocated 20, used 1
3             PIN_FLD_CC_INFO       ARRAY [0] allocated 20, used 8
4                 PIN_FLD_ADDRESS         STR [0] "TestCC-1"
4                 PIN_FLD_CITY            STR [0] "Raleigh"
4                 PIN_FLD_COUNTRY         STR [0] "USA"
4                 PIN_FLD_DEBIT_EXP       STR [0] "XXXX"
4                 PIN_FLD_DEBIT_NUM       STR [0] "XXXX"
4                 PIN_FLD_NAME            STR [0] "TestCC-1"
4                 PIN_FLD_STATE           STR [0] "NC"
4                 PIN_FLD_ZIP             STR [0] "80087"
4                 PIN_FLD_DESC            STR [0] "Monthly Fees"
4                 PIN_FLD_ORDER_ID        STR [0] "12345"
2         PIN_FLD_PAYMENT      SUBSTRUCT [0] allocated 20, used 3
3             PIN_FLD_DESCR           STR [0] "Payment Description"
3             PIN_FLD_PAYMENT_RESULT   ENUM [0] 0
3             PIN_FLD_INHERITED_INFO SUBSTRUCT [0] allocated 20, used 1
4                 PIN_FLD_CC_INFO       ARRAY [0] allocated 20, used 4
5                     PIN_FLD_VENDOR_RESULTS    STR [0] "VC=<vendor_code> SR=<cvv_result> AVS=<avs_result>"
5                     PIN_FLD_RESULT         ENUM [0] 0
5                     PIN_FLD_TRANSACTION_ID       STR [0] ""
5                     PIN_FLD_AUTH_DATE       STR [0] "Sat Apr 20 19:38:21 2013\n"
.
.

fm_custom_pymt_charge_opcode.c result flist

  • After retrieving the results from the DM, the custom opcode should populate the PIN_FLD_CHARGES array on the flist returned to the top-level opcode, PCM_OP_PYMT_COLLECT
D Wed Mar 12 12:00:00 2023  pindev  cm:8959  fm_custom_pymt_charge_opcode.c:136 1:pindev:pin_collect:8952:-143840352:113:1366511900:3
      custom_op_pymt_charge result flist
# number of field entries allocated 20, used 4
0 PIN_FLD_POID           POID [0] 0.0.0.1 /account 81236 0
0 PIN_FLD_CHARGES       ARRAY [0] allocated 20, used 4
1     PIN_FLD_RESULT         ENUM [0] 0
1     PIN_FLD_TRANS_ID        STR [0] "T1,98,0"
1     PIN_FLD_INHERITED_INFO SUBSTRUCT [0] allocated 20, used 1
2         PIN_FLD_CC_INFO       ARRAY [0] allocated 20, used 8
3             PIN_FLD_ADDRESS         STR [0] "TestCC-1"
3             PIN_FLD_CITY            STR [0] "Raleigh"
3             PIN_FLD_COUNTRY         STR [0] "USA"
3             PIN_FLD_DEBIT_EXP       STR [0] "XXXX"
3             PIN_FLD_DEBIT_NUM       STR [0] "XXXX"
3             PIN_FLD_NAME            STR [0] "TestCC-1"
3             PIN_FLD_STATE           STR [0] "NC"
3             PIN_FLD_ZIP             STR [0] "80087"
3             PIN_FLD_DESC            STR [0] "Monthly Fees"
3             PIN_FLD_ORDER_ID        STR [0] "12345"
1     PIN_FLD_PAYMENT      SUBSTRUCT [0] allocated 20, used 3
2         PIN_FLD_PAYMENT_RESULT   ENUM [0] 0
2         PIN_FLD_DESCR           STR [0] "Credit Card Payment"
2         PIN_FLD_INHERITED_INFO SUBSTRUCT [0] allocated 20, used 1
3             PIN_FLD_CC_INFO       ARRAY [0] allocated 20, used 1
4                 PIN_FLD_RESULT         ENUM [0] 0

Payment event

  • The PIN_FLD_CC_INFO on the credit card payment event now contains custom attributes returned from the payment processor.

Conclusion

  • In this mini-series we have discussed not only how to reduce the cost of processing credit card transactions, but we have also covered what should be addressed when implementing any new payment type in BRM.
  • You may wish to review the earlier posts in this series for related information.

Understanding BRM’s Error Buffer


Matt Coburn SSG BRM

If you have any experience with BRM, you probably have encountered strange error messages related to the error buffer. This blog will take a deeper look at the error buffer and help you decipher what some of those messages mean.

When BRM encounters an error in a PCM opcode or a PIN library, it stores and returns those errors using the pin_errbuf_t data type. Both out-of-the-box and custom applications should implement this style of error handling. As BRM users, it is important that we understand the elements of the error buffer and how to read it.

The Error Buffer Structure

BRM defines the error buffer as:

typedef struct {
    int32            location;
    int32            pin_errclass;
    int32            pin_err;
    pin_fld_num_t    field;
    int32            rec_id;
    int32            reserved;
    int32            line_no;
    char            *filename;
    int              facility;
    int              msg_id;
    int              err_time_sec;
    int              err_time_usec;
    int              version;
    pin_flist_t     *argsp;
    pin_errbuf_t    *nextp;
    int              reserved2;
} pin_errbuf_t;

pin_errbuf_t contains a lot of information, but some of its elements are more useful to developers than others. Let’s take a closer look at the elements of the error buffer and discuss each field.

Important Error Buffer Elements

pin_err – This value contains the actual error returned by the application. If an API call is successfully executed, then this value is set to PIN_ERR_NONE; otherwise the error is logged to this field and the other fields in the error buffer are set. Either way, pin_err will always be set to a value that helps you understand what’s going on in your system. These are some of the most common values that you are liable to encounter:

  • PIN_ERR_NOT_FOUND – BRM could not find a requested value or object. If you see this value, it doesn’t always mean a real problem has occurred. For example, some opcodes look for a value in a configuration file. In some cases, even if the configuration file doesn’t contain this information, a default value may be used instead; the opcode can still execute, but you would see this error. In that case, it’s necessary to clear the error so that it does not affect other processing.
  • PIN_ERR_NAP_CONNECT_FAILED – This error indicates that an application tried to connect to the CM and was unable to. This could be due to a configuration issue, or BRM might not be up and running.
  • PIN_ERR_NO_SOCKET – BRM tried to create a socket and failed. This is often caused by the socket already being in use or the maximum number of sockets having been reached. It can often be fixed by verifying that an instance of your application is not already running, or by restarting the machine.
  • PIN_ERR_BAD_ARG – A required field in an FList is incorrect.
  • PIN_ERR_MISSING_ARG – A required argument is missing. Check your log files to see which field is missing. If not indicated in the log, check the documentation for the input FList specification to verify that you included all the required fields.
  • PIN_ERR_NULL_PTR – The dreaded NULL pointer. This error occurs when a function could not get a value because it was set to null. Typically, null pointer errors result from programming errors in custom code.
  • There are over 100 error codes that pin_err can be set to. The full list can be viewed in your BRM installation at $BRM_HOME/include/pin_errs.h. 
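
For example, a quick way to check a code you see in a log file is to search that header directly (a simple sketch):

grep -n "PIN_ERR_BAD_ARG" $BRM_HOME/include/pin_errs.h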

location – The location sometimes gives you information about where in BRM the error occurred. The possible values are:

  • PIN_ERRLOC_APP – This indicates that the error occurred inside your application and not in BRM.
  • PIN_ERRLOC_FLIST – This indicates that the error occurred while trying to manipulate an FList.
  • PIN_ERRLOC_POID – An error in manipulating a POID.
  • PIN_ERRLOC_PCM – The error occurred within a PCM routine local to the application. Common causes include illegal parameters.
  • PIN_ERRLOC_PCP – This error indicates a problem with connectivity or communication between BRM modules, which can be caused by network connection failures and may indicate a system problem.
  • PIN_ERRLOC_CM – An error within the CM. There may be a problem with unregistered opcodes or an input FList lacking a required POID.
  • PIN_ERRLOC_FM – An error with a facilities module. Most of the time, this error is caused by an input FList that doesn’t meet the requirements for an opcode call within a facilities module.
  • PIN_ERRLOC_DM – An error in a DM. Most commonly, this occurs when an input flist does not meet the required specification or if there is a problem communicating with the underlying data storage system.

err_time_sec and err_time_usec – The time the error occurred, in seconds and microseconds respectively.

field – This value contains the field number of the input parameter that caused the error.  This can sometimes be helpful in identifying where the error occurred.

Other Error Buffer Elements

line_no – This value specifies the line number in the BRM source code where the error occurred. For out-of-the-box code this is of little value to a non-Oracle developer, who does not have access to the BRM source. However, when the error is thrown from custom application code, line_no can be very helpful.

filename –  The file name associated with the error. This can be helpful for letting you know where to start looking for problems.

facility and msg_id – These elements hold localization information for international BRM implementations. They specify the facility module code the error came from and a msg_id for use in localizing error messages.

*argsp – This is an optional FList pointer which is often left unused. However, it can be set to an arguments FList.

reserved2 – This field is only useful to Oracle developers and can be ignored for our purposes.

Using the Error Buffer in your Customizations

In your custom BRM code, you should check the error buffer after any significant BRM operation, such as an opcode call or the creation of a new context. If an error occurred, an error should be logged and handled as necessary.

The PCM C API offers a macro to log the error buffer. PIN_ERR_LOG_EBUF logs the information stored in the error buffer along with a descriptive message that you choose, which will be appended to the log file.

PIN_ERR_LOG_EBUF( int32 loglevel, char *logmessage, pin_errbuf_t  *errorBuffer);

PIN_ERR_LOG_EBUF has three arguments:

  • loglevel – This indicates the log level for the error message. This integer value is usually set to PIN_ERR_LEVEL_ERROR, but can also be set to PIN_ERR_LEVEL_DEBUG or PIN_ERR_LEVEL_WARNING if another level of error reporting is required.
  • *logmessage – This character string allows a message to be included in the log alongside the information from the error buffer. The best practice is to set a descriptive message that will help you quickly identify and correct the problem later.
  • *errorBuffer – The pointer to the returned error buffer.

Here is an example of checking and logging an error from the error buffer:

if (PIN_ERR_IS_ERR(&errorBuffer)) {
    printf("BRM error: unable to connect to BRM.\n");
    PIN_ERR_LOG_EBUF(PIN_ERR_LEVEL_ERROR, "Unable to connect to BRM.", &errorBuffer);
    return;
}

The BRM error buffer helps you to identify what caused a problem and locate the source of the issue. It is a valuable tool for debugging and correcting errors in both custom and out-of-the-box implementations of BRM. Proper use of the error buffer helps you verify that your system successfully makes connections, passes all of its checkpoints, and properly receives all the information it requires. If anything went wrong, the error buffer will let you know.

Tokenization and PCI Compliance


Jessica Boepple - SSG Oracle BRM

Considering recent large-scale data breaches, it’s no wonder more and more companies seek to improve their customers’ data security. Tokenization is gaining popularity as a data security method for safeguarding sensitive information ranging from credit card numbers to medical information to loan applications. Tokens improve data security by eliminating the need to directly store raw information.

How does tokenization work?

Tokenization converts credit card numbers (or any piece of sensitive data) into a non-decryptable token, which is later referenced when a payment associated with the token is redeemed. The credit card number itself can be discarded, as the token can be used to retrieve all the necessary information. Here are just a few important benefits of tokenization:

  • Tokenization is more efficient than normal encryption methods because it removes the overhead of going through multiple layers or phases of encryption and decryption when processing a payment.
  • Many applications can operate using tokens as live data, greatly reducing the risk associated with passing sensitive information around.
  • Tokenization eliminates the need to directly store credit card numbers. Instead, only the tokens need to be stored. This helps minimize exposure of sensitive data.

The security and risk reduction benefits of tokenization require that the tokenization system is logically isolated and segmented from data processing systems. Multiple methods for creating tokens are available, as no industry standard for tokenization currently exists. However, valid token generation methods must not have any feasible means to reverse tokens back to live data through direct attack, cryptanalysis, side channel analysis, token mapping table exposure or brute force techniques.

Tokenization is one of many processes available to those wanting to improve their data security; but how do companies measure credit card data security?

PCI Compliance

The Payment Card Industry Data Security Standard (PCI DSS) comprises a set of policies and procedures aimed to protect personal cardholder information from misuse. It establishes a minimum set of standards that must be met by organizations that accept, process and/or transmit credit card data. To achieve compliance, companies must adhere to the following 12 practices:

  1. Install and maintain a firewall configuration to protect cardholder data
  2. Do not use vendor-supplied defaults for system passwords and other security parameters
  3. Protect stored cardholder data
  4. Encrypt transmission of cardholder data across open, public networks
  5. Use and regularly update anti-virus software on all systems commonly affected by malware
  6. Develop and maintain secure systems and applications
  7. Restrict access to cardholder data by business need-to-know
  8. Assign a unique ID to each person with computer access
  9. Restrict physical access to cardholder data
  10. Track and monitor all access to network resources and cardholder data
  11. Regularly test security systems and processes
  12. Maintain a policy that addresses information security

Importance of PCI Compliance

PCI compliance is not required by U.S. federal law. So if compliance is optional, why is it important? You may have noticed that all of the PCI requirements are generally sound business practices, especially for companies that come into contact with sensitive information. To illustrate the importance of compliance, we’ll discuss benefits of compliance and negatives of non-compliance below:

Benefits

  • Compliance with the PCI DSS means that your systems are secure, and customers can trust you with their sensitive payment card information.
  • Compliance is an ongoing process, which helps prevent security breaches and theft of payment card data now and in the future.
  • Through your efforts to comply with PCI Security Standards, you’ll likely be better prepared to comply with other regulations as they come along, such as HIPAA, SOX, etc.

Negatives of Non-compliance

  • Compromised data can result in loss of trust from customers, who depend on you to keep their information safe.
  • Even one incident can damage your reputation and ability to conduct business effectively.
  • Possible negative consequences also include:
    • Lawsuits
    • Insurance claims
    • Cancelled accounts
    • Payment card issuer fines
    • Government fines

Good companies strive to build trusting, long-lasting relationships with their customers. Since PCI compliance assures customers that their data will be safe and secure, compliance is a step in the right direction towards data security and happy customers. So how can tokenization help you uphold good data security practices and make compliance easier?

Tokenization is a step towards PCI compliance

The best way to protect data is to not store it at all. Tokenization removes the headache of being responsible for storing sensitive data—only the tokens associated with credit card numbers need to be stored for reference. By reducing the scope of what can be breached, you are reducing the scope of what needs to be protected. Reducing your scope also means that there will be fewer hoops to jump through when validating compliance, which is something any company can be happy about.

Tokenization in BRM

BRM does not support tokenization out of the box. However, BRM can be modified to use tokens instead of credit card numbers, which can help improve data security. Here at SSG, we’ve customized BRM to use tokenization for many of our clients. Want to learn more about how to implement tokens in BRM? Stay tuned for our next blog, where we’ll go into more detail about the process of customizing BRM to use tokens!

 

Implementing a Type 2 Slowly Changing Dimension Solution in Informatica PowerCenter


Gerald Haynes - SSG Blog - Oracle BRM and Informatica Data Management

A slowly changing dimension is a common occurrence in data warehousing. In general, this applies to any case where an attribute of a dimension record varies over time. There are three typical solutions. In a type 1 solution, the new record replaces the old record and history is lost. In a type 2 solution, a new record is added to the customer dimension table and the customer is effectively treated as two people. In a type 3 solution, the original record is updated in place, typically with an extra column that preserves the previous value. Handling slowly changing dimensions is one of the most critical ETL tasks for tracking the history of dimension data. The advantage of a type 2 solution is the ability to accurately retain all historical information in the data warehouse. This blog will focus on how to create a basic type 2 slowly changing dimension with an effective date range in Informatica. First we will take a look at the table structures.

Target Dimension Table

CREATE TABLE CUSTOMERS
(
CUSTOMER_KEY NUMBER,
CUSTOMER_ID NUMBER,
NAME VARCHAR2(40),
STATE VARCHAR2(2),
BEGIN_DATE DATE,
END_DATE DATE)

Here BEGIN_DATE and END_DATE are used to identify history data, while CUSTOMER_ID is used to identify the dimension record and CUSTOMER_KEY is the primary key used to track new dimensions in the target table. A record is considered new if the CUSTOMER_ID does not exist in the target and the END_DATE is null. A record is considered changed if the CUSTOMER_ID exists in the target table and the STATE from the source is different from the STATE matched to the CUSTOMER_ID in the target.

The source for this demonstration will be similar to the target structure, only without the dates or CUSTOMER_KEY.

Source Table

CREATE TABLE CUSTOMERS_SRC
(
CUSTOMER_ID NUMBER,
NAME VARCHAR2(40),
STATE VARCHAR2(2))

To determine the status of the source records as new or changed, first create a lookup transformation, lkp_CUSTOMERS, on the target table CUSTOMERS_tgt. Add two input ports, in_CUSTOMER_ID and in_NULL_DATE. Under the Condition tab, set the conditions CUSTOMER_ID = in_CUSTOMER_ID and END_DATE = in_NULL_DATE. This transformation will output CUSTOMER_KEY, NAME and STATE.

[Screenshot: the lkp_CUSTOMERS lookup transformation]

Next to the lookup transformation, create an expression transformation, exp_VALIDATE_CHANGE, to determine whether the incoming source record is new or changed. Connect the lookup ports CUSTOMER_KEY, NAME and STATE, and connect the ports NAME and STATE from the source qualifier transformation to exp_VALIDATE_CHANGE. Create two new output ports in the expression transformation named new_FLAG and chg_FLAG. The logic for new_FLAG should check whether the CUSTOMER_KEY returned from the lookup is null. If it is null, the current record has no matching CUSTOMER_ID in the target table, so the record is new and should be inserted. The logic for chg_FLAG should check whether the CUSTOMER_KEY returned from the lookup is not null. If so, it must compare the incoming NAME and STATE columns; if the STATE is different, the record has changed. The incoming record must then be inserted as it is the most current, and the previous record should be updated with an END_DATE. Assign a value of ‘Y’ to a flag if its condition is met and ‘N’ otherwise. In addition to the flags, add another output port called out_SYSDATE to hold the current date. So far, the mapping should look like this.

[Screenshot: mapping with the lookup and exp_VALIDATE_CHANGE expression transformation]
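
A minimal sketch of the flag logic in PowerCenter expression syntax might look like the following; the lkp_-prefixed names are placeholders for however the lookup return ports are named in your mapping:

new_FLAG:    IIF(ISNULL(lkp_CUSTOMER_KEY), 'Y', 'N')
chg_FLAG:    IIF(NOT ISNULL(lkp_CUSTOMER_KEY) AND STATE != lkp_STATE, 'Y', 'N')
out_SYSDATE: SYSDATE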

Next create a router transformation, rtr_UPD_INS_RECORDS. Inside this transformation create three new groups, UPD_CHG, INS_CHG and INS_NEW. These groups will filter the records based on the incoming flags chg_FLAG and new_FLAG. Connect the ports CUSTOMER_KEY, NAME, STATE, CUSTOMER_ID, new_FLAG, chg_FLAG and out_SYSDATE to the router.

For the first group, UPD_CHG, filter the records by checking whether chg_FLAG = ‘Y’. If it is, allow the records to pass; otherwise filter them out. Now only records where the STATE has changed for the CUSTOMER_ID will be passed. Next to this group, create an update strategy transformation, upd_UPD_CHG. Pass the CUSTOMER_KEY and out_SYSDATE ports from the UPD_CHG group. In the update strategy, rename the out_SYSDATE port to END_DATE. The update strategy expression for this should be DD_UPDATE. Connect the appropriate ports to the target instance. The mapping should look like the picture below.

[Screenshot: mapping with the rtr_UPD_INS_RECORDS router and upd_UPD_CHG update strategy]

For the next group, INS_CHG, we will need the CUSTOMER_ID, NAME and STATE from exp_VALIDATE_CHANGE along with the chg_FLAG. Filter the records so that if chg_FLAG is ‘Y’ the records are passed; otherwise, if chg_FLAG = ‘N’, do not pass those records forward. This group will be used to insert a new row into the target table with the changed information while keeping the history. Create a new update strategy transformation, upd_INS_CHG, and connect the NAME, STATE, and CUSTOMER_ID ports from the INS_CHG group. Connect the out_SYSDATE port from the same group and rename it to BEGIN_DATE; this will be the new start date for the record. Next create a sequence generator transformation, which will generate the primary key for the dimension, CUSTOMER_KEY. Connect its NEXTVAL port to the upd_INS_CHG transformation and rename it to CUSTOMER_KEY. The update strategy expression should be DD_INSERT. Connect all appropriate ports to the target instance. Now the mapping should look like the picture below.

[Screenshot: mapping with the upd_INS_CHG update strategy and sequence generator]

The last group, INS_NEW, will be used to insert new records into the target table. Filter the incoming records so that when new_FLAG = ‘Y’ the records are allowed forward; otherwise, when new_FLAG = ‘N’, the records should be rejected. This allows forward only those records whose CUSTOMER_ID is not already present in the target table. Next to the group, create an update strategy transformation, upd_INS_NEW. Connect the NAME, STATE and CUSTOMER_ID ports from the INS_NEW group. Connect the out_SYSDATE port from the same group and rename it to BEGIN_DATE; this will be the new start date for the record. The update strategy expression should be DD_INSERT. Next connect the NEXTVAL port from the sequence generator and rename it to CUSTOMER_KEY. Connect all appropriate ports to the target instance.

[Screenshot: completed mapping with the upd_INS_NEW update strategy]

For an example, consider the following scenario. We have the following record on the target table.

CUSTOMER_KEY  NAME      STATE  CUSTOMER_ID  BEGIN_DATE  END_DATE
5             Doe, Jon  MI     5            01-01-2014  (null)

 

Jon Doe moved to Texas, and needs a new entry into the dimension table to track the history and have an updated record. The source record is below.

CUSTOMER_ID  NAME      STATE
5            Doe, Jon  TX

 

When this record is processed through the mapping, the lookup will find the existing CUSTOMER_ID and return a valid CUSTOMER_KEY = 5. The change flag will be set to ‘Y’ since the state is different in the new record. This satisfies the filter conditions of the router groups INS_CHG and UPD_CHG, so a new record with a new CUSTOMER_KEY is inserted and the previously existing record in the target table is updated with an END_DATE. The result is below.

CUSTOMER_KEY  NAME      STATE  CUSTOMER_ID  BEGIN_DATE  END_DATE
5             Doe, Jon  MI     5            01-01-2014  12-15-2014
20            Doe, Jon  TX     5            12-15-2014  (null)

 

In conclusion, a type 2 slowly changing dimension should be used when it is necessary for the data warehouse to track historical changes. Although this causes the dimension table to grow quickly, it is one of the most commonly used approaches when an attribute of a record varies over time.

Data security for your company. Seriously. It’s time.


Data Security
Justin Passofaro SSG

Another day, another data security breach. I keep hearing companies say that they need to get smarter and protect their data, but the truth is that a majority are moving too slowly or not at all. Every single company houses sensitive data. It might be on-site, on a hosted server or in the cloud, but it all needs to be considered in your enterprise data security plan. Anthem just had 80 million social security numbers hacked from their customer database. Just finding the reason for the breach and addressing it from a technical standpoint will cost an estimated 3.5 million dollars. This figure does not even include the lost revenue and the damage to customer goodwill resulting from this breach.

The major misconception about data security is that it only needs to exist at the production level, and even then the concentration is more on network and database security than on the data itself. This could not be farther from the truth. It is actually quite scary when you start thinking about how most organizations provide data to their non-production environments for application testing. The most typical practice is to push production data into the lower environments so that application development teams have the most accurate data to test with, ensuring all business scenarios are accounted for and resulting in the fewest bugs during the development lifecycle. However, this data is extremely sensitive in almost all cases: social security numbers, credit card numbers, client lists, medical information, benefits, salaries and so on.

So how can the same effective application development occur without the risk? By applying data masking to your sensitive data across all non-production environments using Informatica Persistent Data Masking and Data Subset.

The Persistent Data Masking and Data Subset software can profile, identify and mask sensitive data all within a codeless user interface that can be used by technical and business users alike to solidify your company’s data security plan.

When you use this software to mask your sensitive data, you are not just scrambling the data so it cannot be read. You are masking it with like-values that are valid application values so that application development teams can still produce enhancements to your systems with the same efficiency and effectiveness as with actual production data.

Data security is more important now than ever before and there is no reason not to be proactive in getting your company protected. SSG has significant and proven experience implementing data security solutions with the Informatica Data Security software suite. Please contact us if you would like to get more information on how we might be able to help you get your data secured today.

Create testnap scripts for BRM using SQL queries


Steve Mansfield SSG
In this blog, we’ll learn how to create testnap scripts using SQL queries. Before we dive in, it might be a good idea to brush up on the following topics unless you’re already an expert:

  1. SQL query understanding, including SQLPLUS
  2. Basic BRM knowledge, including using testnap (editing/executing scripts)

Basic Query

Let’s start with the basics. Suppose you run the following simple query from SQLPLUS, and you have an account POID 50591:

SQL> select first_name, last_name, address, city, state, zip

from account_nameinfo_t

where obj_id0 = 50591;

Your output might look something like this, with each value appearing in its own cell:

John
Doe
500 Main
Dallas
TX
75081

Combining Multiple Fields into a Single Result Cell

What if we wanted the account holder’s first and last name combined into a single cell, rather than spanning two cells? We can concatenate multiple fields in our query using the double-pipe operator (||), as shown below; only the select list changes from the first query.

select first_name || ' ' || last_name, address, city, state, zip

from account_nameinfo_t

where obj_id0 = 50591

The results now look like this:

John Doe
500 Main
Dallas
TX
75081

Note that the two fields for first name and last name from account_nameinfo_t are concatenated, so “John Doe” appears in one cell. The other information is still populated in separate cells.

Adding a Carriage Return (CHR(13)) and Line Feed (CHR(10))

You can also combine first name, last name, and address into a single cell using the carriage return and line feed characters (CHR(13) and CHR(10)), as shown here:

select

first_name || ' ' || last_name || chr(13) || chr(10) || address,

city, state, zip

from account_nameinfo_t

where obj_id0 = 50591;

The results look the same as before in SQLPLUS; however, the first name, last name, and address are now all part of one single cell (even though they are displayed on multiple lines in the SQLPLUS console).

John Doe
500 Main
Dallas
TX
75081

Creating Testnap Scripts

Extending these concepts, we can create entire testnap scripts with SQL. This is a very basic example of a testnap script:

r << XXX 1

0 PIN_FLD_POID      POID [0] 0.0.0.1 /account 56375 13

XXX

d 1

robj 1

Suppose we wanted to generate the above testnap script, from SQL, for every account in our BRM instance (as a simple example). We could do:

select 'r << XXX 1' || chr(13) || chr(10) ||

'0 PIN_FLD_POID      POID [0] 0.0.0.' || a.poid_db || ' '

|| a.poid_type || ' ' || a.poid_id0 || ' '

|| a.poid_rev  || chr(13) || chr(10) ||

'XXX' || chr(13) || chr(10) ||

'd 1' || chr(13) || chr(10) ||

'robj 1'

from account_t a;

The results may look something like this:

r << XXX 1
0 PIN_FLD_POID      POID [0] 0.0.0.1 /account 1 1
XXX
d 1
robj 1
r << XXX 1
0 PIN_FLD_POID      POID [0] 0.0.0.1 /account 56375 13
XXX
d 1
robj 1
r << XXX 1
 ...

As you can see, we are pulling the poid_db, poid_type, poid_id0 and poid_rev columns from the account_t table as separate fields, but concatenating them together to form the full account POID in line [2] of the testnap script. Whenever we want to insert a hard return inside a text cell, we use chr(13) || chr(10). Using these concepts, you can create more complex testnap scripts as well. This can be a very efficient method: if you don’t want to type out the same script over and over, you can generate it once in SQL and reuse it easily!
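
To capture the generated script in a file that can be passed straight to testnap, the SQL*Plus formatting and spool commands are handy. A minimal sketch, with placeholder file names:

set heading off
set feedback off
set pagesize 0
set linesize 200
set trimspool on
spool /tmp/read_all_accounts.nap
-- run the script-generating query shown above here
spool off

The spooled file (/tmp/read_all_accounts.nap in this sketch) can then be run with testnap /tmp/read_all_accounts.nap.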

Mastering business rules that constantly change


John Nettuno SSG

Over the years I’ve seen companies large and small struggle with managing their business rules. It seems like teams are always asking the same questions: How many rules do we have? Which version is correct? Where did these calculations come from?

Having ready answers to these questions is key to effectively managing business rules and making timely decisions. Another important factor in an efficient rules management program is the ability to update the rules at any given time. It is a challenging cycle that never ends, because changing the rules often brings additional expense. Without a fully integrated system, rule changes can lead to duplicate labor, rework and the infrastructure expense of managing multiple environments, not to mention the hard-to-quantify costs of different answers to the same question and the churn that causes.

SSG has partnered with Informatica to leverage their business glossary solution to solve this problem. In past engagements, we’ve helped clients improve their rules management within their IT enterprise using the following methodology:

  1. Catalog the current business rules.
  2. Identify duplicate business rules that exist and combine them for consistency and management simplicity.
  3. Provide a web-based user interface where business and IT users can see what rules are available and the logic behind them.

This three-step approach is proven to help solve the problems many face with their business rule management. By creating a catalog of business rules for identification and ongoing maintenance, you’ll spend less time trying to figure out which rules are correct, and more time working from rules you know are reliable.

Contact me to learn more about SSG’s business glossary solution.


The key principles to MDM success


Dave Barman SSG

As I ride back to Philadelphia, I can’t help but think of the day’s events at Informatica MDM Day 2015 in New York City. The day was rich with keynotes and panels from customers, partners and Informatica, all sharing a variety of experiences with their Master Data Management implementations. Unsurprisingly, despite coming from various industries, like finance, banking and healthcare, nearly all panelists and keynote speakers had similar experiences during their MDM implementations.

The MDM Day discussions, along with my own experience, compel me to share a few tips. There are three key takeaways to keep in mind with Master Data Management:

Find your executive advocates.

Despite MDM technology being a complicated set of integrated tools, success does not rely on simply executing an implementation plan. As I heard during the panels, it’s clear that the key for successful master data management projects is a close alignment with business leaders who are deeply vested in data. These business visionaries know that data quality is a driving factor of their success. A strong executive sponsor will drive data governance from within your organization, and ultimately, into master data management.

It makes me think of my own journey to MDM.

In the past, I’ve been part of teams whose IT leaders relied too much on their users to ‘own’ the data. Data stewardship put the onus of reconciling and validating data onto its consumers. There wasn’t a real partnership between business and IT to further data quality. Data governance is more than talking about data and information in your organization; it requires action and a process. But how should organizations delegate accountability to the business data owners in order to achieve the organic progression of data governance and master data management?

The MDM Day panelists stressed the importance of these critical executive sponsors: the business data owners who have come to recognize the value of master data management. These business data gurus exist in every industry, but in different places. For example, a sales organization will value customer, opportunity, and lead information. A marketing organization may value demographics, while a healthcare organization will value provider and patient safety initiatives, and so on. When the guru is engaged, they will speak passionately about the data, its issues, and how it affects their business. These are the individuals to partner with on your MDM journey.

Start small!

Begin with a small team and a small implementation, but find a complex (though not too complex!) problem whose solution will create value for the business. Once executives see this value, there will be much more support for the next project.

The Informatica MDM product is large and complicated, and it is highly integrated with many pieces of Informatica’s software stack. Because of this complexity, nearly every panelist at the event recommended partnering with a services vendor for implementation expertise, even if your team has ETL experience. While this technical implementation is taking place, your staff should focus on establishing the process around data governance and identifying the key stakeholders who may act as data stewards.

Is your organization ready to begin this journey? Have you found those stakeholders who are business data gurus? Are you ready to begin a relationship with an implementation partner to deliver MDM? Then please contact SSG Limited for more information about how we can assist as your most trusted advisor on these initiatives.

In my next blog, I will be discussing the third takeaway: creating value for the business with MDM.

Custom Fields in Oracle BRM: PODL Files


Jenny Streett, SSG

In the previous post, we saw how to create custom fields in Oracle’s Developer Center client. Developer Center is a useful tool for developers who are new to BRM because it provides a simple graphical interface for creating custom fields and storable classes and shows a list of fields that are already in BRM’s data dictionary. More experienced developers often find that it’s faster and more convenient to define fields textually using PODL (Portal Object Definition Language) files. PODL files also have the advantage of allowing developers to manage field and object definitions in source control and migrate them between environments.

Creating Custom Fields Using a PODL File

  1. In your .podl file, specify the field type, name, and field number using the following syntax:
    • # prefaces a comment.
    • The field’s ID number must be unique. Note: BRM reserves field numbers below 10,000 for its own fields, so custom fields start at 10,000.
#============================================
# Field MY_FLD_LANGUAGE
#============================================
STRING MY_FLD_LANGUAGE {
     ID = 10000;
     DESCR = "Primary language of the account owner";
}
  2. Deploy the .podl file to BRM using the pin_deploy script. You can use pin_deploy create to preserve any existing field or storable class definitions that conflict with your new ones, or pin_deploy replace to override any conflicting definitions:
pin_deploy create my_custom_fields.podl 
  3. If you choose not to use Developer Center to generate the C and Java source files for your custom fields, you will need to create them manually. For C and C++ applications, create a header file that contains a definition for each of your custom fields:
#define MY_FLD_LANGUAGE   PIN_MAKE_FLD(PIN_FLDT_STR, 10000)
  4. For Java applications, create a Java class for each of your custom fields. Each class should extend a base Portal field type. The first parameter in the constructor is the field number, and the second corresponds to the field type:
    • Integer field = 1
    • Enumeration field = 3
    • String field = 5
    • Buffer field = 6
    • Poid field = 7
    • Timestamp field = 8
    • Array field = 9
    • Substruct field = 10
    • Binary String field = 12
    • Decimal field = 14
public class MyFldLanguage extends StrField {
     public MyFldLanguage() { super(10000,5); }
     
     public static synchronized MyFldLanguage getInst() {
          if (me == null) me = new MyFldLanguage();
          return me;
     }

     private static MyFldLanguage me;
}
  5. In the Infranet.properties file for your Java application, specify a property for each of your custom fields and the package name for your custom field classes:
infranet.custom.field.package=com.ssg.brm.customflds
infranet.custom.field.10000=MY_FLD_LANGUAGE
  6. Once you’ve created your source files, follow the steps from the previous post to include them in your code. A minimal usage sketch is shown below.
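To make step 6 concrete, here is a minimal sketch, not code from this post, of how the custom field might be used from a C or C++ application once the header from step 3 is included. The header name my_custom_flds.h and the value "en_US" are illustrative assumptions.

#include "pcm.h"                /* BRM PCM API; header path varies by SDK install */
#include "my_custom_flds.h"     /* hypothetical header containing the MY_FLD_LANGUAGE #define */

/* Once the #define from step 3 is visible, the custom field is used exactly
 * like a built-in field. With SET, the caller keeps ownership of the value,
 * so there is nothing to free here. */
static void set_account_language(pin_flist_t *flistp, pin_errbuf_t *ebufp)
{
    PIN_FLIST_FLD_SET(flistp, MY_FLD_LANGUAGE, (void *)"en_US", ebufp);
}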

Master Data Management: identify and measure value


Dave Barman, SSG

There are a few “must-haves” for a successful Master Data Management initiative. My previous blog touched on two major principles I learned during my day in NYC for the 2015 MDM Conference held by Informatica. The first principle is to find your business data owner who will champion your organization’s data. The second principle is to start small with a complex, achievable project.

The third principle is to deliver value.

Of course we want to achieve value in the projects we commit to finishing. That’s the point! But things can get murky when we are talking about data quality and master data management. We’re not so much delivering a product or service as we are delivering an experience.

How can we quantify an experience? How do we define the value that MDM brings to our business?

Before we worry about quantifying the value derived from MDM, it’s important to note that there is intangible value associated with a successful MDM system:

With MDM, data users save hours: they can trust their golden source and no longer need to visit several different systems to reconcile data. There is more attention to data as it is entered, because there is now a process for identifying issues with source system data. There is also more awareness at the data governance level, which ripples into other parts of the business, such as establishing standards for business rules and metrics. These kinds of benefits are much harder to quantify.

As for ROI quantification, remember my previous post when I listed some industries and verticals that would be keen on certain master data? Let’s lean on our business data guru again. What metrics do they use as a key performance indicator for their business?

These metrics become the quantifiable value in your ROI model. They will be specific to your industry and vertical, they should be identified as critical to the business, and there typically will not be more than three or four of them. Calculate these metrics on your data before MDM is implemented, then recalculate them at intervals after the MDM process and implementation are established. Look at the numbers 3, 6, 9, and 12 months after implementation; you should see a measurable difference in each metric, which can then be converted to a dollar amount using your organization’s business rules for that metric.

For example, consider a customer MDM solution. This solution should provide a golden record for each customer along with enriched data for leads. A potential metric might be sales from leads or time from first contact to sale. Begin by establishing a baseline for the metric before executing the MDM project. Once the MDM implementation is live, compare that metric to the baseline values over time. Some variance in the data will not be attributable to MDM but to changes in the business itself. Also, remember that the intangible value is still present in your solution.

Now that we have demonstrated value to executive leadership, expanding MDM will make sense. By utilizing these methods, a savvy IT steering committee will be able to get the most value, ROI and impact from their next project.

Need more advice on your MDM initiative? See how SSG can help and feel free to comment or send us your questions.

TAKE or GET? PUT or SET? The BRM Memory Management Conundrum


Randal Blackmon, SSG

GET or TAKE? SET or PUT? Which one should I use?

That decision comes down to how you want to manage memory in Oracle BRM.

Why would I want to use TAKE?

Let’s say you have an flist and you want to edit the data in one of its fields. TAKE is perfect here because it lets you pull the field off the flist, edit it, and then use PUT to add it back to the flist.

I highly recommend using TAKE and PUT together because you don’t have to free the memory yourself: you take the memory from the flist with TAKE, then give it back with PUT.
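As a rough sketch of that pattern, here is what TAKE followed by PUT might look like with the PCM flist macros. The field choice (PIN_FLD_NAME), headers, and error handling are simplified assumptions, not code from this post.

#include <ctype.h>
#include "pcm.h"   /* BRM PCM API; header path varies by SDK install */

/* TAKE a string field off the flist, edit it in place, then PUT the same
 * memory back. Any string field would follow the same pattern. */
static void capitalize_name(pin_flist_t *flistp, pin_errbuf_t *ebufp)
{
    /* TAKE removes PIN_FLD_NAME from the flist; the caller now owns it. */
    char *name = (char *)PIN_FLIST_FLD_TAKE(flistp, PIN_FLD_NAME, 1, ebufp);

    if (name != NULL) {
        name[0] = (char)toupper((unsigned char)name[0]);   /* edit in place */

        /* PUT hands the same memory back to the flist; after this call the
         * flist owns it again, so do not free it or keep using the pointer. */
        PIN_FLIST_FLD_PUT(flistp, PIN_FLD_NAME, (void *)name, ebufp);
    }
}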

But wait! If you use TAKE, watch out for these pitfalls.

When you use TAKE, control of the memory passes to you, and once you are done with the data, you must free it. If you don’t, that memory leaks: the more often this happens, the more data just sits in memory, and soon you’ll be out of usable memory because you never freed your data.

TAKE also removes the field from the flist. If you still need that field on the flist, for example to pass the complete flist to opcodes, I recommend using GET instead of TAKE.

Why use GET?

Want to retrieve a field from an flist without removing it? Then GET will work for you. GET lets you retrieve information from an flist while leaving the flist intact.
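Here is a minimal sketch of the read-only GET pattern, again with PIN_FLD_NAME as an assumed example field:

#include <stdio.h>
#include "pcm.h"   /* BRM PCM API; header path varies by SDK install */

/* GET returns a pointer to data that still belongs to the flist. Treat it as
 * read-only, do not free it, and use it before the flist is modified or
 * destroyed. The third argument (1) marks the field as optional, so nothing
 * is flagged if the field is missing. */
static void print_name(pin_flist_t *flistp, pin_errbuf_t *ebufp)
{
    char *name = (char *)PIN_FLIST_FLD_GET(flistp, PIN_FLD_NAME, 1, ebufp);

    if (name != NULL) {
        printf("account name: %s\n", name);   /* read-only use */
    }
}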

Does GET have pitfalls?

Yes! There are a few.

You must treat data retrieved with GET as read-only. If you want to edit the field, TAKE is better suited to your needs.

Modifying the flist after a GET will also cause problems. If you use GET and then modify the flist, the pointer may no longer be valid, and the data you get back (if you get anything back) will be garbage. So make sure you do your GETs before any TAKEs, SETs, or PUTs on that flist.

Why would I use PUT?

Have some memory you’ve allocated but don’t want to manage or free yourself? Then use PUT to hand that memory over to the flist.

PUT is great to use after TAKE. This transfers the memory you took from the flist back to the flist.

Surely PUT doesn’t have pitfalls.

Wrong. Even PUT has its pitfalls.

You cannot use PUT on memory you did not allocate (or take ownership of). If you try to PUT memory you don’t own, it will cause a segmentation fault. In particular, you can’t GET a field and then PUT the pointer GET returned, because that memory still belongs to the flist.

Once you use PUT, you cannot use the data elsewhere. So make sure when you use PUT, it’s the last time you’ll need that data.

If you use PUT on a field that already exists on the flist, it will be overwritten with the data used in the PUT. So be careful and make sure you’re not accidentally overwriting a field.

Why would I want to use SET?

Don’t own the memory but want to put the data on the flist? Then SET is the solution to your problems.

SET allows you to put data on the flist without having to allocate or own the memory yourself. It also lets you keep using the data elsewhere, such as setting it on a different flist.
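A minimal sketch of SET, reusing the same value on two flists; the field and value are assumptions for illustration:

#include "pcm.h"   /* BRM PCM API; header path varies by SDK install */

/* With SET the caller does not hand ownership to the flist, so the same value
 * can be reused (for example, on a second flist) and there is nothing for the
 * caller to free afterwards. */
static void name_both_flists(pin_flist_t *a_flistp, pin_flist_t *b_flistp,
                             pin_errbuf_t *ebufp)
{
    const char *name = "Example Account";   /* string literal: static storage */

    PIN_FLIST_FLD_SET(a_flistp, PIN_FLD_NAME, (void *)name, ebufp);
    PIN_FLIST_FLD_SET(b_flistp, PIN_FLD_NAME, (void *)name, ebufp);
}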

Let me guess—this has pitfalls too.

You’re correct.

Much like PUT, if you try to use SET on a field that already exists on the flist, the field will be overwritten.

What do I recommend?

I recommend using GET over TAKE and SET over PUT. This way you’re not responsible for creating or freeing memory. I would only use TAKE and PUT when I can’t use GET or SET.

RESPEC Receives 2023 Engineering Excellence Award for Outstanding Flood Control Design


Pictured left to right: David Gatterman, Executive Engineer, SSCAFCA; Chris Naidu, Supervisor, RESPEC; and Hugh Floyd, Manager, RESPEC

On April 21, 2023, RESPEC was honored with the 2023 Engineering Excellence Award at the New Mexico American Council of Engineering Companies Annual Awards Gala. The award recognizes RESPEC’s remarkable contribution to engineering for its innovative and sustainable flood control design for the Lisbon Pond in Rio Rancho, New Mexico.  

Flooding has long been a problem for Rio Rancho, where severe storms and flash flooding have historically been devastating and costly. In 2013, the Southern Sandoval County Flood Control Authority (SSCAFCA) identified the Lisbon Pond as critical infrastructure needed to address capacity issues and flooding risks downstream of the project area.  

SSCAFCA selected RESPEC to work on the final design. We worked with SSCAFCA to develop a design that mitigated the risk of flooding and safeguarded the communities and infrastructure in the downstream area.  

Initially, the construction of Lisbon Pond was estimated to cost $1.3 million and called for several reinforced structures. SSCAFCA chose RESPEC to finalize the design and manage construction, and rather than build those structures, RESPEC devised an innovative approach: a side-channel emergency spillway. We used the excess earth from the pond excavation to raise two historically washed-out roadways that often exposed critical utilities in the area. The raised roadway embankments also acted as a secondary ponding area, capturing and detaining runoff that would otherwise have bypassed Lisbon Pond and further preventing flooding of nearby roads and critical utilities. This approach reduced the need for expensive structures and saved the client money, with a final project cost of $1.24 million.

Construction was completed in May 2021, and the new dam immediately proved its worth. Two high-intensity storms hit the area a week after construction, putting the design to the test. The facilities effectively detained the stormwater runoff, providing relief and protection to the public and nearby infrastructure. 

Receiving the 2023 Engineering Excellence Award highlights RESPEC’s commitment to safety and sustainability. Creative solutions, close coordination with different agencies, and a cost-effective design played a crucial role in the success of the Lisbon Pond Project.  

RESPEC’s outstanding contribution to engineering is a testament to the company’s excellence, innovation, and commitment to solving complex challenges. The award underscores RESPEC’s position as a leading engineering and consulting company and a trusted partner to clients seeking exceptional engineering solutions.  

