30 Salesforce Developer Interview Questions – 2026

1. A batch job processing 50,000 Account records is hitting governor limits midway. How would you redesign it?

  • When a batch job fails midway due to governor limits, the first step is to identify which limit is being exceeded – CPU time, heap size, DML rows, or SOQL queries. Since it processes 50,000 records, I’d suspect either CPU timeout or too many DML statements. Here’s how I’d redesign:
  • Optimize the batch size: The default batch size is 200, but I might lower it (e.g., 50 or 100) to reduce per-execution workload. Database.executeBatch allows a second parameter for scope size.
  • Minimize SOQL and DML in execute: Ensure all querying is done in start and passed via Database.QueryLocator – that’s already efficient. But if I need additional related data, I’d use maps to bulkify.
  • Avoid CPU-heavy operations: Move any complex calculations to asynchronous processes or use platform features like Flow or Formula fields if possible.
  • Use stateful batch only when necessary: If I need to aggregate data across chunks, I’d use Database.Stateful carefully to avoid heap issues.
  • Break into multiple batches: If the logic is complex, maybe split the job: one batch to prepare data, another to process.
  • Implement error handling: Use Database.SaveResult to capture failures and optionally write errors to a custom object for later review, so the whole batch doesn’t fail on one bad record.

I’d also test with a smaller subset in a sandbox, monitoring limits via debug logs or Limits methods.
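
A minimal sketch of the redesigned job, assuming an Account-processing batch (the query and per-chunk logic are placeholders):

```apex
public class AccountCleanupBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // All querying happens once here; execute() receives pre-chunked scopes
        return Database.getQueryLocator([SELECT Id, Name FROM Account]);
    }

    public void execute(Database.BatchableContext bc, List<Account> scope) {
        // ...bulkified per-chunk logic...
        // allOrNone = false: one bad record no longer fails the whole chunk
        List<Database.SaveResult> results = Database.update(scope, false);
        for (Database.SaveResult sr : results) {
            if (!sr.isSuccess()) {
                // write sr.getErrors() to a custom log object for later review
            }
        }
    }

    public void finish(Database.BatchableContext bc) {}
}
// Launch with a reduced scope size of 100 instead of the default 200:
// Database.executeBatch(new AccountCleanupBatch(), 100);
```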

2. How do you prevent duplicate Opportunities from being created simultaneously by multiple users?

Duplicate prevention is critical, especially when users might click “Save” at the same time. I’d use a combination of techniques:

  • Unique fields: If there’s a natural unique identifier (like an external ID or a concatenated key populated by automation), mark the field as Unique. This constraint is enforced at the database level, so it’s the one mechanism that genuinely prevents two simultaneous inserts from both succeeding.
  • Before‑trigger duplicate check: In a before‑insert trigger, query existing Opportunities with the same key fields. If found, add an error. However, this can have race conditions if two users insert at the exact same millisecond.
  • Use Database.upsert with an external ID: If the source system provides an ID, upsert ensures you don’t create duplicates.
  • Asynchronous deduplication: For high‑volume scenarios, consider using a queueable job to process incoming records and merge duplicates, but that doesn’t prevent simultaneous creation.
  • Post‑insert cleanup: If duplicates are rare, you could detect them after insert (e.g., via an async job) and merge, but that’s after the fact.
  • Create a Matching Rule that identifies duplicates (e.g., same Account + Name + Close Date). Then set the Duplicate Rule to Block or Alert. This is your first line of defense for normal use cases.
  • Record locking: FOR UPDATE in a SOQL query pessimistically locks the returned records, but it can’t lock rows that don’t exist yet, so it doesn’t prevent simultaneous inserts.
trigger OpportunityDuplicateCheck on Opportunity (before insert) {
  // Bulkified: one query for the whole chunk instead of one per record
  Set<String> names = new Set<String>();
  for (Opportunity opp : Trigger.new) { names.add(opp.Name); }
  Set<String> keys = new Set<String>();
  for (Opportunity e : [SELECT Name, AccountId, CloseDate FROM Opportunity WHERE Name IN :names]) {
    keys.add(e.Name + '|' + e.AccountId + '|' + e.CloseDate);
  }
  for (Opportunity opp : Trigger.new) {
    if (keys.contains(opp.Name + '|' + opp.AccountId + '|' + opp.CloseDate)) {
      opp.addError('A similar Opportunity already exists.');
    }
  }
}

3. A trigger is causing recursive execution. How do you fix it?

Recursive triggers happen when a trigger performs DML that fires the same trigger again. To fix it:

  • Use a static Boolean flag: In a helper class, declare a static variable like static Boolean alreadyExecuted = false;. At the start of the trigger, check if it’s true; if not, set it true and run the logic. This prevents re‑entry across the same transaction.
  • Be careful with multiple trigger contexts: If you have separate triggers for before/after insert/update, you might need different flags or a more nuanced approach.
  • Best practice: Use a trigger handler class. In the handler, you can have a static map to track which records have been processed, or simply a static Boolean for the entire operation.
  • Also, design your logic to avoid unnecessary DML: For example, if you’re updating a field on the same record, consider doing it in a before trigger to avoid an extra update.
public class AccountTriggerHandler {
    // Caveat: a single Boolean blocks ALL later chunks of a bulk operation
    // (e.g. a 10,000-record Data Loader run). A static Set<Id> of already-
    // processed record IDs is more robust when re-entry can be legitimate.
    public static Boolean isTriggerExecuted = false;

    public static void handleAfterUpdate(List<Account> newList) {
        if (isTriggerExecuted) return;
        isTriggerExecuted = true;
        // your logic here
    }
}

4. A callout to an external API must roll back if it fails. How do you architect this?

  • Option 1: Savepoint and rollback – Set a savepoint and roll back DML on failure. Note the hard constraint: Apex throws a CalloutException (“You have uncommitted work pending”) if you attempt a callout after uncommitted DML in the same transaction, so a “DML first, callout second” flow is not possible synchronously. Rolled‑back DML also still counts against governor limits.
  • Option 2: Invert the order – Do the callout first, then if successful, do DML. This is safer because you don’t commit data until you know the external system succeeded.
  • Option 3: Use Platform Events and a reliable messaging pattern – Insert a platform event with the data, then have a separate process (e.g., a trigger on the event) make the callout. If the callout fails, you can retry or mark the event for reprocessing. This gives you an audit trail and eventual consistency.

For true rollback, consider using two‑phase commit (not natively supported). Often, the business accepts eventual consistency. I’d discuss with stakeholders: is it critical that Salesforce and external system are perfectly in sync? If yes, we might need a compensating transaction.
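
Option 2 can be sketched like this. The Named Credential name, endpoint path, and the Sync_Status__c custom field are assumptions for illustration:

```apex
public class RecordSyncService {
    public static void syncRecord(Id recordId) {
        // Callout FIRST: Apex forbids callouts after uncommitted DML
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:External_System/records'); // assumed Named Credential
        req.setMethod('POST');
        HttpResponse res = new Http().send(req);

        if (res.getStatusCode() == 200) {
            Savepoint sp = Database.setSavepoint();
            try {
                update new Account(Id = recordId, Sync_Status__c = 'Synced'); // assumed field
            } catch (DmlException e) {
                Database.rollback(sp);
                // Salesforce failed after the external system succeeded:
                // fire a compensating callout or log for reconciliation
            }
        }
    }
}
```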

5. A nightly scheduled Apex job is silently failing. How do you investigate and make it more resilient?

Silent failures are nasty. I’d start by checking:

  • Apex Jobs UI: Setup → Apex Jobs to see if the job ran, its status, and any error messages.
  • Debug logs: Enable logging for the job’s user and time window.
  • Email notifications: Check the Apex Exception Email recipients (Setup → Apex Exception Email) so unhandled failures notify admins, and build error handling into the job that sends emails or creates log records.
  • Custom logging: In the batch or schedulable, wrap the logic in try‑catch and write errors to a custom object (e.g., Error_Log__c) with stack trace, timestamp, and record IDs.
  • Monitor limits: Use Limits methods to log if any governor limits are approached.
  • Make it resilient:
    • In a Batchable class, implement the finish method to send a summary email.
    • Use Database.executeBatch with a scope size that fits within limits.
    • For callouts, implement retries with exponential backoff.
    • Schedule a monitoring job that checks for missing runs or recent errors.

Also, consider using Platform Events to publish job status, which can be monitored externally.
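
The finish-method summary mentioned above might look like this (the recipient address is a placeholder):

```apex
public void finish(Database.BatchableContext bc) {
    AsyncApexJob job = [SELECT Status, NumberOfErrors, JobItemsProcessed, TotalJobItems
                        FROM AsyncApexJob WHERE Id = :bc.getJobId()];
    Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
    mail.setToAddresses(new String[] { 'admin@example.com' }); // placeholder
    mail.setSubject('Nightly job finished: ' + job.Status
        + ' (' + job.NumberOfErrors + ' errors)');
    mail.setPlainTextBody('Processed ' + job.JobItemsProcessed
        + ' of ' + job.TotalJobItems + ' chunks.');
    Messaging.sendEmail(new List<Messaging.SingleEmailMessage> { mail });
}
```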

6. You need to auto-create a Task every time a new Lead is inserted. Where do you implement this?

The best place is a Lead trigger (after insert). But we must be careful about recursion and bulkification.

  • Use a trigger handler pattern.
  • In the after insert context, loop through the new Leads and build a list of Tasks.
  • Insert the Tasks in bulk (one DML operation).
  • Ensure the trigger handler has a static flag to prevent recursion if the Task creation somehow triggers another Lead update (unlikely unless a flow or other automation updates the Lead).

Alternatively, you could use a record‑triggered Flow – declarative and easier to maintain (Workflow Rules and Process Builder are retired, so Flow is now the declarative tool of choice). But if there’s complex logic, Apex gives more control. I’d choose based on the client’s preference and the complexity.

If using Flow, a record‑triggered flow on Lead creation can create the Task. That’s often the simplest and most maintainable.

trigger LeadTrigger on Lead (after insert) {
  List<Task> tasks = new List<Task>();
  for (Lead l : Trigger.new) {
    tasks.add(new Task(
      Subject = 'Follow up with new Lead',
      WhoId = l.Id,
      OwnerId = l.OwnerId,
      ActivityDate = Date.today().addDays(1),
      Status = 'Not Started'
    ));
  }
  insert tasks;
}

7. How do you implement a retry mechanism for a Queueable Apex job that calls an intermittently failing external service?

Queueable jobs can be chained, so we can implement retries by re‑enqueuing the job on failure. Here’s a pattern:

  • In the execute() method, wrap the callout in a try‑catch.
  • If it fails, check a custom retry count field on the job data (or store in a custom object).
  • If retries remain, enqueue a new instance of the same class with the same data but incremented retry count.
  • Use the delayed overload System.enqueueJob(job, delayInMinutes) – it supports a delay of up to 10 minutes for simple backoff; schedule a one‑time job for longer delays. For immediate retry, just enqueue again – but beware of chaining and queue limits.
  • Log each failure for monitoring.
public class RetryableCallout implements Queueable, Database.AllowsCallouts {
  private Integer retryCount;
  private static final Integer MAX_RETRIES = 3;
  private String recordId;

  public RetryableCallout(String recordId, Integer retryCount) {
    this.recordId = recordId;
    this.retryCount = retryCount;
  }

  public void execute(QueueableContext ctx) {
    try {
      HttpResponse res = makeCallout(recordId);
      if (res.getStatusCode() != 200) throw new CalloutException('API failed');
      // success logic
    } catch (Exception e) {
      if (retryCount < MAX_RETRIES) {
        System.enqueueJob(new RetryableCallout(recordId, retryCount + 1));
      } else {
        // Max retries hit — log and alert
        insert new Error_Log__c(Message__c = 'Max retries exceeded: ' + e.getMessage());
      }
    }
  }
}

  • Don’t retry instantly — add a delay pattern. Use a Scheduled Job triggered from the catch block with a delay (e.g., 1 min, 5 min, 15 min) to avoid hammering the external system.
  • When max retries are exceeded, write to a ‘Failed_Integration__c’ custom object. A separate scheduled job or manual process can re-trigger these. This gives you visibility and manual recovery options.

8. An LWC component must react to data changes made by other components on the same page. How do you implement this?

LWC components are isolated by design. To make them talk to each other, you need a shared communication mechanism. Your choice depends on whether components have a parent-child relationship or are siblings on the same page.

Option 1 — Lightning Message Service (LMS) — Best for Unrelated Components
LMS lets any component on the page subscribe to a named message channel, regardless of DOM hierarchy. This is the Salesforce-recommended approach for sibling or cross-DOM communication:

// Publisher component
import { publish, MessageContext } from 'lightning/messageService';
import RECORD_UPDATED_CHANNEL from '@salesforce/messageChannel/RecordUpdated__c';

@wire(MessageContext) messageContext;

handleUpdate() {
  publish(this.messageContext, RECORD_UPDATED_CHANNEL, { recordId: this.recordId });
}

// Subscriber component
import { subscribe, MessageContext } from 'lightning/messageService';
import RECORD_UPDATED_CHANNEL from '@salesforce/messageChannel/RecordUpdated__c';

@wire(MessageContext) messageContext;

connectedCallback() {
  this.subscription = subscribe(this.messageContext, RECORD_UPDATED_CHANNEL,
    (message) => this.handleMessage(message));
}


Option 2 — Custom Events for Parent-Child
For parent-child relationships, fire a custom event from the child and handle it in the parent. The parent can then pass updated data down via @api properties.

Option 3 — Lightning Data Service Shared Cache
If all components display the same record data, wire each to getRecord from lightning/uiRecordApi – they share a single Lightning Data Service cache. After a mutation, updateRecord (or getRecordNotifyChange) refreshes that cache and every wired component re-renders; for @wire'd Apex methods, call refreshApex() on that component's own wired result.

9. You’re building a multi-step wizard in LWC where users can go back and forward without losing data. How do you architect it?

A multi‑step wizard requires preserving state across steps. I’d architect it like this:

  • Use a single LWC component that manages the wizard’s state. The state is a JavaScript object containing all the data entered so far.
  • Conditionally render different sections (steps) based on a currentStep property.
  • Fields are reactive by default in modern LWC; use @track only if you mutate nested properties of the state object (or reassign a fresh copy on each change).
  • On each step, bind input fields to the state properties.
  • For navigation, update currentStep – data persists because it lives in the state object.
  • At the final step, perform the submission (e.g., DML) using all accumulated data.
  • Optionally, use sessionStorage or localStorage to persist state across page reloads, if needed.

For complex wizards, you might break each step into a child component that receives and fires events to update the parent’s state. That keeps each step focused.

<template lwc:if={step1}>
    <c-step-one data={wizardData} onnext={handleNext}></c-step-one>
</template>
<template lwc:elseif={step2}>
    <c-step-two data={wizardData} onprevious={handlePrevious} onnext={handleNext}></c-step-two>
</template>
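
The parent component’s JavaScript backing a template like the one above could look roughly like this (event and property names are illustrative):

```javascript
import { LightningElement } from 'lwc';

export default class Wizard extends LightningElement {
    wizardData = {};   // single source of truth across steps
    currentStep = 1;

    get step1() { return this.currentStep === 1; }
    get step2() { return this.currentStep === 2; }

    handleNext(event) {
        // each child fires 'next' with its field values in event.detail
        this.wizardData = { ...this.wizardData, ...event.detail };
        this.currentStep++;
    }

    handlePrevious() {
        this.currentStep--;   // wizardData is untouched, so nothing is lost
    }
}
```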

10. LWC users on slow connections experience a jarring experience during Apex calls. How do you improve this?

To improve UX on slow connections:

  • Show loading indicators: Use lightning-spinner or custom loading spinners while waiting for Apex.
  • Disable buttons during callout to prevent double‑submission.
  • Use @wire with Apex methods annotated @AuraEnabled(cacheable=true) when possible – results are cached client‑side, so the UI can render cached data quickly while fresh data loads.
  • Implement optimistic UI: For non‑critical operations, update the UI immediately and then reconcile with server response. For example, when toggling a checkbox, reflect the change instantly; if the server update fails, roll back with a toast message.
  • Use client‑side storage (localStorage/sessionStorage) to cache frequently used reference data (like picklist values) so you don’t fetch them every time.
  • Batch multiple requests into one Apex call to reduce round trips.
  • Consider using platform events for real‑time updates without polling, but that’s more for streaming.

Also, communicate clearly to the user with toasts or status messages, so they know something is happening.
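
The optimistic-UI idea can be sketched as follows; updateRecordStatus is an assumed imperative Apex method and showToast an assumed helper:

```javascript
// Optimistic toggle: update the UI immediately, reconcile with the server after.
async handleToggle(event) {
    const previous = this.isActive;
    this.isActive = event.target.checked;   // instant feedback
    try {
        await updateRecordStatus({ recordId: this.recordId, active: this.isActive });
    } catch (error) {
        this.isActive = previous;           // roll back on failure
        this.showToast('Error', 'Update failed – change reverted.', 'error');
    }
}
```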

11. Users want to edit records from a related list in a modal without leaving the page. How do you implement this?

This is a common requirement. In Lightning Experience, we can’t easily add custom buttons to related lists that open modals with standard functionality. But we have options:

  • Create a custom LWC or Aura component that replaces the related list or adds a button column. Inside the component, you can use lightning-record-edit-form or lightning-record-form in a modal (using lightning-modal or a custom modal overlay).
  • Use a quick action with a screen flow or LWC – quick actions can be added to related lists (though the UI is a bit limited). The action can open a modal with a flow to edit the record.
  • Use a custom button that calls a Lightning page – but that typically navigates away.

I’d propose building a custom related list LWC that displays the records and includes an “Edit” button for each row. Clicking the button opens a modal with a form bound to that record. After saving, refresh the list. This gives full control and a smooth experience.

12. You need to display a real-time feed of field service updates on a Custom dashboard without polling.

Real‑time updates without polling typically use Streaming API (PushTopic, Platform Event, or Change Data Capture). For a dashboard, I’d:

  • Use Platform Events for custom updates. When a field service record changes, publish a platform event.
  • On the dashboard (which is an LWC), subscribe to the platform event using the lightning/empApi module (lightning:empApi in Aura).
  • When an event is received, update the component’s data reactively – perhaps refresh a wire adapter or just update a tracked property.
  • Alternatively, use Change Data Capture for standard object changes. Subscribe to the channel /data/ChangeEvents and filter for the object.

This gives real‑time push without polling. The dashboard component must be open in the user’s browser to receive events. For an always‑on dashboard, that’s fine.
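
A sketch of the subscriber side, assuming a custom platform event named Field_Service_Update__e:

```javascript
import { LightningElement } from 'lwc';
import { subscribe } from 'lightning/empApi';

export default class ServiceFeed extends LightningElement {
    updates = [];
    subscription;

    connectedCallback() {
        // replayId -1 = receive only events published after subscribing
        subscribe('/event/Field_Service_Update__e', -1, (message) => {
            this.updates = [message.data.payload, ...this.updates];
        }).then((response) => { this.subscription = response; });
    }
}
```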

13. A SOQL query on a 2-million-record custom object is timing out. Walk through your optimization approach.

A SOQL query timing out usually means it’s not selective enough or is doing too much work. Here’s my step‑by‑step optimization:

  1. Check query selectivity: Salesforce requires selective filters to avoid full table scans. Ensure the WHERE clause uses indexed fields (Id, Name, lookup and audit fields, or custom fields marked External ID or Unique). As a rough rule, a filter on a custom index is selective when it returns under 10% of records (30% for a standard index, with absolute caps in the hundreds of thousands of rows). If no filter qualifies, request a custom index from Salesforce Support or redesign.
  2. Use selective filters: Add filters on indexed fields like CreatedDate, RecordType, or a lookup. For date ranges, use bounded ranges.
  3. Avoid SELECT * (FIELDS(ALL)): Only query fields you actually need.
  4. Use skinny tables if available (for very large objects).
  5. Consider using a summary or aggregate: If you only need counts, use COUNT().
  6. If query is in a batch job, use Database.getQueryLocator which handles large datasets efficiently.
  7. If it’s a real‑time query, maybe move the logic to a batch job or use a custom index.
  8. Partition data: If records can be partitioned by something (like Region or Year), use separate queries.
  9. Use SOQL for loops to process records in chunks of 200, but that doesn’t solve timeout – it just prevents heap issues.

Finally, I’d use the Query Plan tool in Developer Console to see if the query uses indexes.
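
To make the selectivity point concrete (the object and field names here are hypothetical):

```apex
// Non-selective: negative filters (!=) can't use an index, forcing a full scan
List<Order__c> slow = [SELECT Id FROM Order__c WHERE Status__c != 'Closed'];

// Selective: positive filter plus a bounded range on an indexed audit field
List<Order__c> fast = [
    SELECT Id, Status__c
    FROM Order__c
    WHERE Status__c IN ('Open', 'Pending')
      AND CreatedDate = LAST_N_DAYS:30
    LIMIT 10000];
```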

14. How do you find Accounts that have no Opportunities closing in the last 90 days?

The most efficient relational approach would be a LEFT OUTER JOIN, but SOQL has no explicit joins. It does, however, support an anti‑join – NOT IN with a subquery, including a date filter inside the subquery:

SELECT Id, Name
FROM Account
WHERE Id NOT IN (
    SELECT AccountId
    FROM Opportunity
    WHERE CloseDate >= LAST_N_DAYS:90
)

But NOT IN can be slow if the subquery returns many rows. A better approach might be:

SELECT Id, Name
FROM Account
WHERE Id NOT IN (
    SELECT AccountId
    FROM Opportunity
    WHERE CloseDate >= LAST_N_DAYS:90 AND AccountId != null
)

If there are millions of Accounts, this might still time out. An alternative is to use a rollup summary on Account that counts Opportunities in the last 90 days (if CloseDate is used, you might need a formula or a scheduled batch to update a custom field). Then query Account WHERE Opportunities_Last_90_Days__c = 0. That’s pre‑aggregated and super fast.

If rollup isn’t possible, consider using a report or Analytics API.

For a one‑time query, the NOT IN with a date filter is acceptable but test performance.

15. You need data that spans five levels of related objects. How do you work within SOQL’s relationship query limits?

SOQL relationship queries are limited: you can traverse up to 5 levels child‑to‑parent via dot notation, while parent‑to‑child subqueries have historically been limited to a single level (newer API versions allow deeper nesting in some cases). To get data from 5 related objects, you might need to:

  • Restructure the query: Instead of one deep query, break it into multiple queries and assemble in Apex. For example, query the main object, then query child objects separately, and map them in code.
  • Use relationship queries carefully: You can go up to 5 levels of parent (child-to-parent) using dot notation, or child-to-parent-to-child etc. But mixing both quickly hits the limit.
  • Consider using Schema.getGlobalDescribe() to understand relationships.
  • If the data is for reporting, maybe use Report Types and let Salesforce handle it, or use Analytics API.
  • Create a custom index or denormalized fields if you frequently need this data.
  • Use Platform Events or Big Objects if it’s massive.

In practice, I’d likely write a batch job that collects all needed data into a custom object for reporting, updating it periodically.
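
The two traversal directions from the list above, for reference:

```apex
// Child-to-parent: up to five levels via dot notation
List<Case> cases = [SELECT Id, Contact.Account.Owner.Manager.Name FROM Case LIMIT 100];

// Parent-to-child: one subquery level per child relationship
List<Account> accts = [
    SELECT Id,
           (SELECT Id, LastName FROM Contacts),
           (SELECT Id, StageName FROM Opportunities)
    FROM Account
    LIMIT 100];
```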

16. A team member wrote a SOQL query inside a for loop. How do you explain and fix it?

First, I’d explain the problem: a SOQL query inside a loop executes once per iteration, so a 200‑record trigger chunk fires 200 queries – double the limit of 100 SOQL queries per synchronous transaction. It also degrades performance.

How to fix: Move the query outside the loop, gather all necessary IDs, and query in bulk. For example, instead of:

for (Account a : accounts) {
    List<Contact> cons = [SELECT Id FROM Contact WHERE AccountId = :a.Id];
    // do something
}

Refactor to:

Set<Id> accIds = new Set<Id>();
for (Account a : accounts) {
    accIds.add(a.Id);
}
Map<Id, List<Contact>> contactsByAccount = new Map<Id, List<Contact>>();
for (Contact c : [SELECT Id, AccountId FROM Contact WHERE AccountId IN :accIds]) {
    if (!contactsByAccount.containsKey(c.AccountId)) {
        contactsByAccount.put(c.AccountId, new List<Contact>());
    }
    contactsByAccount.get(c.AccountId).add(c);
}
for (Account a : accounts) {
    List<Contact> relatedContacts = contactsByAccount.get(a.Id);
    // process
}

This uses one query and simple loops. I’d also mention that a parent‑to‑child subquery (SELECT Id, (SELECT Id FROM Contacts) FROM Account WHERE Id IN :accIds) can achieve the same in a single statement, and that maps make the lookups efficient.

17. An external ERP sends thousands of order records every hour via REST. How do you design a reliable integration?

For high‑volume hourly integration, reliability is key. I’d design:

  • Use REST API with Bulk API 2.0 if possible – it’s designed for large data volumes and processes records asynchronously.
  • Implement idempotency: The ERP should include a unique message ID or order ID so we can deduplicate. Use upsert with external ID.
  • Use a queueing mechanism: Receive the data, store it temporarily (e.g., in a custom object or platform event), then process asynchronously via batch or queueable. This decouples ingestion from processing and prevents timeouts.
  • Implement error handling and retries: If processing fails, move records to an error queue with retry logic.
  • Monitor with custom metrics: Log counts, durations, and errors.
  • Use a middleware (like MuleSoft or Workato) if the ERP can’t handle Salesforce’s limits.
  • Secure with Named Credentials and OAuth.
  • Consider using Composite API if many related records need to be created together.

For example, ERP POSTs to a Salesforce REST endpoint. The endpoint inserts a custom object Inbound_Order__c and enqueues a queueable job to process it. The queueable does upserts and reports back.
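
That last flow might be sketched like this; Inbound_Order__c, its fields, the X-Message-Id header, and the OrderProcessor Queueable are all assumed names:

```apex
@RestResource(urlMapping='/orders/*')
global with sharing class OrderIngestService {
    @HttpPost
    global static String ingest() {
        // Stage the raw payload; process asynchronously to decouple ingestion
        Inbound_Order__c stage = new Inbound_Order__c(
            Payload__c = RestContext.request.requestBody.toString(),
            Message_Id__c = RestContext.request.headers.get('X-Message-Id'));
        // Upserting on the ERP's message ID makes redelivery idempotent
        upsert stage Inbound_Order__c.Message_Id__c;
        System.enqueueJob(new OrderProcessor(stage.Id)); // assumed Queueable
        return 'queued';
    }
}
```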

18. A Salesforce outbound message is not being received by the external endpoint. How do you debug it?

Outbound messages are configured as actions in Workflow Rules (and, in newer orgs, Flows). To debug:

  • Check the Outbound Message setup: Verify the endpoint URL, user credentials, and that it’s active.
  • Look at the Outbound Message queue: Setup → Monitoring → Outbound Messages. See if messages are queued, sent, or failed. If failed, there’s often an error message.
  • Check the external system logs: Ensure the endpoint is up and that requests are reaching it. Sometimes firewalls block Salesforce IP ranges.
  • Use a tool like RequestBin or webhook.site temporarily to capture the payload and verify structure.
  • Enable debug logs for the workflow user (the user configured in the outbound message). Look for Workflow: send log entries.
  • Check SSL certificates: If the endpoint uses self‑signed certs, Salesforce might reject it. Use a proper CA‑signed certificate.
  • Review acknowledgements and retries: Outbound messages are asynchronous, but the endpoint must return a SOAP Ack response promptly. If it doesn’t, Salesforce marks the delivery failed and retries at increasing intervals for up to 24 hours before dropping the message.

If nothing shows, consider using a Platform Event and an external subscriber as a more reliable alternative.

19. You need bidirectional sync between Salesforce and an external CRM without causing infinite loops.

Bidirectional sync is tricky because each system can update the other, creating a loop. Solutions:

  • Use a “source” field: In Salesforce, add a field like Last_Sync_Source__c that indicates which system made the last change. When updating from external, set that field to “External”. In the trigger, if the source is “External”, skip sending the update back.
  • Use a sync timestamp: Only sync records modified after the last sync time, and include that timestamp in the sync message. The receiving system updates only if its local version is older.
  • Use a message queue with deduplication: When a change occurs, publish an event with a unique ID. The subscriber checks if it has already processed that event.
  • Implement a “sync direction” flag for each field: Some fields sync only one way.
  • Use a middleware like MuleSoft that manages the sync state and prevents loops.
  • For near real‑time, use Platform Events with a unique event ID and have the external system ignore events it originated.

The key is to tag each change with its origin and ignore updates that originated from the other system.
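
The “source field” guard from the first bullet can be sketched as a trigger; Last_Sync_Source__c is the assumed custom field:

```apex
trigger AccountSyncTrigger on Account (after update) {
    List<Account> toPublish = new List<Account>();
    for (Account acc : Trigger.new) {
        // Skip changes that the external CRM itself just made
        if (acc.Last_Sync_Source__c != 'External') {
            toPublish.add(acc);
        }
    }
    if (!toPublish.isEmpty()) {
        // publish a platform event / enqueue the outbound callout here
    }
}
```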

20. A Named Credential is configured but callouts return 401 errors in production only.

A 401 (Unauthorized) in production only suggests an environment‑specific issue:

  • Check the Named Credential’s authentication settings: In production, the Principal might be set to “Named Principal” with a specific user, and that user’s password may have expired or changed. Or the certificate might be different.
  • Verify the endpoint URL: Is it pointing to a production environment of the external system that expects different credentials?
  • Check if the Named Credential uses OAuth 2.0 and the access token has expired. In production, maybe the refresh token flow is broken because the callback URL is different.
  • Review the external system’s logs: See if requests are reaching and what user is being presented.
  • Test in a sandbox with a similar configuration: Compare with production to spot differences.
  • Use HttpRequest directly with the Named Credential endpoint and log the response headers (with proper caution). You can temporarily add debug to see the full response.
  • Check network access: confirm no IP restrictions or firewall rules specific to production are interfering. Unlikely for outbound callouts, but cheap to rule out.

Often, the issue is a credential that needs to be re‑authorized or a certificate that wasn’t uploaded to production.

21. A user can see a button but gets an error when saving. Their profile has edit access. What do you investigate?

Seeing the button means the UI permissions are fine (e.g., the button is visible). Save error despite edit access suggests a deeper issue:

  • Field‑level security (FLS): The user might have edit access on the object but not on a particular field that’s required or being updated. Check if any fields on the layout are required but the user can’t edit them.
  • Validation rules: The data might trigger a validation rule that the user can’t bypass.
  • Trigger or flow errors: Automation running during the save may fail in the user’s context – for example, a trigger that updates a related record the user has no write access to, or a flow fault surfacing as a save error.
  • Sharing rules: The user might not have write access to the specific record because of sharing settings, even though profile says edit. Check the record’s owner and sharing.
  • Apex sharing reasons: If the code uses with sharing, it might restrict access.
  • Look at debug logs for the user’s session to see exactly what DML fails and why.
  • Check if the button is a custom button that calls Apex – maybe the Apex code is failing due to a null pointer or governor limit.

I’d replicate the scenario in a sandbox with the same user profile and data to isolate.

22. You’re building a partner portal where users should only see their own company’s data. How do you enforce this?

Partner portals (now Experience Cloud sites with partner licenses) require data isolation. I’d enforce via:

  • Use Account Sharing based on Partner Account: Set up a sharing rule that shares records with the partner’s account and all its contacts (portal users). Typically, you create a partner community and assign each user to an Account. Then, use OWD to keep data private, and create sharing sets to grant access based on the user’s Account.
  • Manual sharing using Apex if complex logic is needed, but sharing sets are declarative.
  • Use criteria‑based sharing rules to share records where a lookup field matches the user’s Account.
  • In Apex, always use with sharing in classes that query data for portal users to enforce the sharing rules.
  • For custom objects, remember that portal users sit outside the internal role hierarchy. Partner users get up to three roles beneath their partner account; for license types without roles, sharing sets are the mechanism.
  • Test thoroughly: Log in as a portal user and verify they see only their own company’s data.

If using Person Accounts, it’s similar but with a different object model.

23. An Apex class is returning records the running user shouldn’t see. How do you identify and fix it?

If a class returns records the user shouldn’t see, it’s likely declared without sharing, or it omits the sharing keyword entirely and therefore inherits the caller’s context (which may itself be without sharing). The fix:

  • Identify the class: Check its definition. If it’s without sharing, it bypasses sharing rules. Change to with sharing if the logic should respect user permissions.
  • If the class must be without sharing (e.g., for admin operations), then the method that returns data to a user should be in a separate with sharing context. You can call a with sharing class from a without sharing one.
  • Use Security.stripInaccessible to filter fields that the user can’t see, but that doesn’t filter rows. For row‑level security, you need sharing.
  • Review SOQL queries: WITH SECURITY_ENFORCED only enforces object and field‑level security, not record sharing. The newer WITH USER_MODE enforces object permissions, FLS, and sharing rules together.
  • Test with a user that has restricted access and run the class in an anonymous block to see what they get.

To identify the culprit, I’d check the class’s sharing declaration and the sharing rules for the object. If the class must be without sharing for internal reasons but exposed via a REST service, I’d ensure the service layer enforces sharing by querying in a with sharing context.
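
A minimal sketch of the fixed service layer, combining with sharing, USER_MODE, and FLS stripping:

```apex
public with sharing class AccountQueryService {
    public static List<Account> getAccounts() {
        // WITH USER_MODE enforces object perms, FLS, and sharing rules
        List<Account> accs = [SELECT Id, Name, AnnualRevenue
                              FROM Account WITH USER_MODE LIMIT 200];
        // Belt-and-braces: strip any fields the running user can't read
        return (List<Account>) Security.stripInaccessible(
            AccessType.READABLE, accs).getRecords();
    }
}
```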

24. Your AppExchange package fails a security review due to SOQL injection. How do you find and fix all instances?

SOQL injection happens when dynamic SOQL uses unsanitized user input. To find and fix:

  • Scan the codebase for any Database.query(), Database.countQuery(), or Database.queryWithBinds() call built via string concatenation. Use IDE search or a scanner like Checkmarx or Salesforce Code Analyzer.
  • Look for variables directly embedded in SOQL strings without String.escapeSingleQuotes().
  • Use static analysis tools (like PMD with Salesforce rules) to identify vulnerabilities.
  • Fix each instance:
    • Prefer static SOQL with bind variables whenever possible (e.g., [SELECT Id FROM Account WHERE Name = :userInput]). Bind variables are safe.
    • If dynamic SOQL is necessary, use String.escapeSingleQuotes() on any user‑supplied strings.
    • For dynamic WHERE clauses, consider using a whitelist of allowed field names.
  • Implement a controller layer that validates and sanitizes input before passing to the query.
  • Write unit tests that attempt injection and verify they are blocked.
  • Educate the team on secure coding practices.

After fixes, resubmit for security review.
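
The before/after pattern in miniature (userInput stands for any user-supplied String):

```apex
// Vulnerable: user input concatenated straight into dynamic SOQL
String bad = 'SELECT Id FROM Account WHERE Name = \'' + userInput + '\'';
List<Account> risky = Database.query(bad);

// Safe #1: static SOQL with a bind variable
List<Account> safe1 = [SELECT Id FROM Account WHERE Name = :userInput];

// Safe #2: dynamic SOQL with named binds (API v57.0+)
List<Account> safe2 = Database.queryWithBinds(
    'SELECT Id FROM Account WHERE Name = :n',
    new Map<String, Object>{ 'n' => userInput },
    AccessLevel.USER_MODE);
```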

25. Sandbox deployment succeeds but production fails due to low test coverage. How do you handle this?

This indicates that overall test coverage in production is lower than in the sandbox – typically because production contains additional metadata (unrelated classes with weak tests) that drags down the aggregate, or because tests behave differently against production configuration.

Steps:

  • Check the test coverage report in production after a failed deployment. It shows which classes are under‑covered.
  • Know the actual rule: a production deployment running local tests requires at least 75% aggregate coverage across all non‑managed Apex in the org, and every trigger must have some coverage. There is no per‑class 75% gate in this mode – which is exactly why unrelated legacy classes with poor tests can sink a deployment once your new code shifts the average.
  • Solution: Write additional tests for the existing low‑coverage classes to raise the aggregate. Managed package code is excluded from the calculation, so the gap is always in code your team can test.
  • Use a pre‑deployment validation with RunLocalTests to see coverage impact.
  • In the deployment, you can specify RunSpecifiedTests to only run tests for your classes, but overall coverage still matters.
  • Best practice: Maintain high test coverage in all environments, and run all tests before any production deployment.
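A check-only validation can be run from the Salesforce CLI before the real deploy; the project path below is illustrative:

# Validate against production without saving anything: runs all local tests
# and reports org-wide coverage exactly as the deployment would compute it.
sf project deploy start --dry-run --test-level RunLocalTests --source-dir force-app

The same validation is available in Setup via the Validate action on a change set.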

26. You need to deploy a metadata change alongside a data migration with zero downtime. How do you plan this?

Zero‑downtime deployment means users can continue working while changes are applied. I’d use a phased approach:

  1. Backward‑compatible changes first: If adding a new field, deploy it without making it required. Write code that can handle both old and new data.
  2. Use a data migration tool (like Data Loader, Workbench, or an Apex batch) to populate the new field on existing records. This can run after the metadata deployment.
  3. For schema changes that are breaking (e.g., renaming a field), use a multi‑step process:
    • Add new field, deploy code that writes to both old and new fields.
    • Backfill data.
    • Change code to read from new field.
    • Finally, remove old field.
  4. Use Platform Events to queue data changes so they can be processed asynchronously without blocking users.
  5. Coordinate with users: Schedule the data migration during low‑usage hours, even if the metadata deploy happens during the day.
  6. Leverage Change Sets or DX for deployment, and have a rollback plan.
  7. Test the entire process in a sandbox first.

The key is to avoid locking records or forcing users to wait. Use asynchronous processing for data migration.
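The backfill in step 2 can be sketched as a simple batch; the object and field names here are hypothetical:

public class BackfillNewFieldBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Only rows not yet migrated, so the job is safely re-runnable
        return Database.getQueryLocator(
            'SELECT Id, Legacy_Field__c FROM Account WHERE New_Field__c = null');
    }
    public void execute(Database.BatchableContext bc, List<Account> scope) {
        for (Account a : scope) {
            a.New_Field__c = a.Legacy_Field__c; // dual-write period: copy forward
        }
        // allOrNone=false so one bad row doesn't roll back the whole chunk
        Database.update(scope, false);
    }
    public void finish(Database.BatchableContext bc) {}
}

A small scope size (e.g. Database.executeBatch(new BackfillNewFieldBatch(), 100)) keeps record locks short while users keep working.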

27. Three developers are working in parallel sandboxes. How do you merge their work and avoid conflicts?

To manage parallel development, use a version control system like Git with a branching strategy:

  • Each developer works in their own sandbox and commits changes to a feature branch in Git.
  • Use Salesforce DX to convert metadata to source format, making it easier to merge.
  • Regularly integrate: Developers should pull the latest from the main branch into their feature branches to catch conflicts early.
  • Use a CI/CD pipeline (like Jenkins or GitHub Actions) to validate and deploy to an integration sandbox after merging.
  • For metadata conflicts (e.g., two developers modified the same file), Git will flag them. The team must manually resolve by discussing the changes.
  • Use unlocked packages or change sets for deployment, but source control is essential.
  • Communicate frequently about which components each is working on.

If using traditional change sets, it’s harder to merge. I’d advocate moving to DX and Git.
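At the command line, the flow above might look like this; the branch and org names are illustrative:

git checkout -b feature/discount-engine      # per-developer feature branch
sf project retrieve start --target-org dev1  # pull sandbox changes into source format
git commit -am "Discount engine v1"
git fetch origin
git merge origin/main                        # integrate early; resolve conflicts locally
git push origin feature/discount-engine      # the PR then triggers CI validation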

28. You’ve inherited 12 conflicting triggers on the Opportunity object. How do you refactor them?

Having 12 triggers on one object is a recipe for disaster. I’d refactor by:

  1. Consolidate into a single trigger using a trigger handler framework (like fflib from the Apex Enterprise Patterns library, or a custom one). This trigger calls a handler class that delegates to different logic classes based on context.
  2. Analyze each existing trigger to understand its purpose. Document what each does, when it runs, and any dependencies.
  3. Create a unified order of execution: Determine the correct sequence (e.g., before insert, after insert, before update, etc.) and combine logic in the handler. Use separate methods for each piece.
  4. Ensure no duplication: If two triggers did the same thing, merge them.
  5. Use static flags to prevent recursion if triggers call each other.
  6. Write comprehensive tests covering all scenarios.
  7. Deploy the new single trigger and remove the old ones.
trigger OpportunityTrigger on Opportunity (before insert, after insert, before update, after update) {
    OpportunityTriggerHandler.handle();
}

Handler uses a switch on Trigger.operationType and calls relevant methods.
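A minimal handler along those lines; the method names are illustrative and their bodies are elided:

public class OpportunityTriggerHandler {
    public static void handle() {
        switch on Trigger.operationType {
            when BEFORE_INSERT {
                applyDefaults((List<Opportunity>) Trigger.new);
            }
            when AFTER_UPDATE {
                syncRelatedRecords((List<Opportunity>) Trigger.new,
                                   (Map<Id, Opportunity>) Trigger.oldMap);
            }
            when else {
                // remaining contexts delegate to their own methods
            }
        }
    }
    private static void applyDefaults(List<Opportunity> opps) { /* ... */ }
    private static void syncRelatedRecords(List<Opportunity> opps,
                                           Map<Id, Opportunity> oldMap) { /* ... */ }
}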

29. A complex discount calculation must run from a trigger, a batch job, and an API endpoint. How do you avoid duplicate logic?

To avoid duplication, encapsulate the logic in a service class that can be called from anywhere. For example:

public class DiscountCalculator {
    // Pure, stateless calculation -- safe to call from any context
    public static Decimal calculateDiscount(Opportunity opp) {
        Decimal discount = 0;
        // complex logic here
        return discount;
    }
}
  • In trigger, call DiscountCalculator.calculateDiscount(opp) and update the field.
  • In batch job, iterate over records and call the same method.
  • In API endpoint (e.g., a REST service), call it as well.

If the logic requires DML (like updating related records), the service method can accept a list and perform bulk DML. Ensure it’s stateless and reusable.

Also, consider using inversion of control or a factory if the logic varies by context, but the core calculation stays the same.
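The three call sites can be sketched as follows; everything except DiscountCalculator (the Discount__c field, the batch shell, the REST class and URL mapping) is illustrative:

// Trigger context: bulkified loop over Trigger.new
for (Opportunity opp : (List<Opportunity>) Trigger.new) {
    opp.Discount__c = DiscountCalculator.calculateDiscount(opp);
}

// Batch execute(): same method, same semantics
public void execute(Database.BatchableContext bc, List<Opportunity> scope) {
    for (Opportunity opp : scope) {
        opp.Discount__c = DiscountCalculator.calculateDiscount(opp);
    }
    update scope;
}

// Apex REST endpoint exposing the same calculation
@RestResource(urlMapping='/discount/*')
global with sharing class DiscountService {
    @HttpPost
    global static Decimal quote(Opportunity opp) {
        return DiscountCalculator.calculateDiscount(opp);
    }
}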

30. Your org is approaching storage limits. Records older than 7 years must be archived but remain queryable. How do you architect this?

To archive old records while keeping them queryable, I’d consider:

  • Big Objects: Salesforce Big Objects are designed for large volumes of data that are rarely updated but need to stay queryable. They have lower storage cost and can store billions of records. I’d move older records to a Big Object.
  • External data storage: Use an external database (like AWS S3 + Athena) and access it via Salesforce Connect (OData) or custom API calls. But that adds complexity.
  • Custom object with record types? Not ideal because storage isn’t reduced.
  • Process: Create a batch job that runs monthly, querying records older than 7 years, and inserts them into a Big Object (or external store). Then delete them from the standard object. Ensure you maintain relationships by including parent IDs.
  • For querying: Build a custom LWC or Apex that can query both current and archived data, perhaps by checking a flag or using a union approach. With Big Objects, you can query directly via SOQL (with some limitations).
  • Compliance: Ensure you meet legal retention policies.

This approach keeps primary storage lean while still allowing access to historical data when needed.
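The monthly job described above might be sketched like this; the Big Object Account_Archive__b and its fields are hypothetical:

public class ArchiveOldAccountsBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            'SELECT Id, Name, CreatedDate FROM Account WHERE CreatedDate < LAST_N_YEARS:7');
    }
    public void execute(Database.BatchableContext bc, List<Account> scope) {
        List<Account_Archive__b> archived = new List<Account_Archive__b>();
        for (Account a : scope) {
            archived.add(new Account_Archive__b(
                Source_Id__c = a.Id,          // preserve the original ID for lookups
                Name__c      = a.Name,
                Created__c   = a.CreatedDate));
        }
        // insertImmediate is the DML operation for Big Objects; it is not
        // transactional with the delete, so delete only after it succeeds.
        Database.insertImmediate(archived);
        delete scope;
    }
    public void finish(Database.BatchableContext bc) {}
}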

Author

  • Satyam Parasa

    Satyam Parasa is a Salesforce and Mobile application developer. Passionate about learning new technologies, he is the founder of Flutterant.com, where he shares his knowledge and insights.
